
Comment: Boycotting this on principle. (Score 3, Informative) 109

by VortexCortex (#46830625) Attached to: Band Releases Album As Linux Kernel Module

The last time someone's music got into my kernel it was Sony with a rootkit. At least these folks are open about nabbing root.

They really screwed the pooch on this deal. Since their name is 'netcat', I'm waiting for the song to be released via telnet server as ANSI music. That way I can netcat the netcat album with my cross platform old school Codepage 437 + PC speaker enabled terminal emulator from GNU, Linux, BSD, OSX, iOS, Android, Windows, MSDOS or even DR-DOS. Maybe I'd buy in if the cover art was a sick scroller.

In all seriousness: Any FLOSS publicity is good publicity. Windows or Mac folks can run Linux in a VM to try out the audio; it's not my cup of tea, but sort of neat.

Comment: The true answer is yet to come: DHT-FS (Score 5, Interesting) 72

As a user of both BitTorrent and Git, and a creator of many "toy" operating systems which have such BT+Git features built in, I would like to inform you that I live in the future that you will someday share, and unfortunately you are wrong. From my vantage I can see that link rot was not ever, and is not now, acceptable. The architects of the Internet knew what they were doing, but the architects of the web were simply not up to the task of leveraging the Internet to its fullest. They were not fools, but they just didn't know then what we know now: Data silos are for dummies. Deduplication of resources is possible if we use info hashes to reference resources instead of URLs. Any number of directories AKA tag trees AKA human readable "hierarchical aliases" can be used for organization, but the data should always be stored and fetched by its unique content ID hash. This even solves hard drive journaling problems, and allows cached content to be pulled from any peer in the DHT that has the resource. Such info hash links allow all your devices to stay synchronized. I can look back and see the early pressure pushing towards what the web will one day become -- Just look at ETags! Silly humans, you were so close...

Old resources shouldn't even need to be deleted if a distributed approach is taken. There is no reason to delete things; isn't there already a sense that the web never forgets? With decentralized web storage everyone gets free co-location, essentially, and there are no more huge traffic bottlenecks on the way to information silos. Many online games have built-in downloader clients that already rely on decentralization. The latest cute cat video your neighbor notified you of will be pulled in from your neighbor's copy, or if they're offline, then from the other peer that they got it from or shared it with, and so on up the DHT cache hierarchy all the way to the source if need be, thus greatly reducing ISP peering traffic. Combining an HMAC with the info hash of a resource allows secured pages to link to unsecured resources without worrying about their content being tampered with: Security that's cache friendly.
<img infohash="SHA-512:B64;2igK...42e==" hmac="SHA-512:SeSsiOn-ToKen, B64;X0o84...aP=="> <!-- Look ma, no mixed content warnings! -->
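The core idea above (fetch by content hash, treat names as aliases, verify on arrival) fits in a few lines. A minimal Python sketch, where an in-memory dict stands in for the DHT and all the names are illustrative:

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: blobs keyed by their own hash,
    names are just human-readable pointers (a dict stands in for the DHT)."""
    def __init__(self):
        self.blobs = {}   # infohash -> bytes
        self.names = {}   # alias / "hierarchical alias" -> infohash

    def put(self, data):
        h = hashlib.sha512(data).hexdigest()
        self.blobs[h] = data          # identical content dedupes automatically
        return h

    def link(self, name, infohash):
        self.names[name] = infohash   # any number of aliases per blob

    def get(self, name):
        h = self.names[name]
        data = self.blobs[h]          # in a real DHT this could come from any peer
        assert hashlib.sha512(data).hexdigest() == h   # tamper check for free
        return data

store = ContentStore()
h1 = store.put(b"cute cat video")
h2 = store.put(b"cute cat video")    # same content, same hash: stored once
store.link("videos/cats/funny.webm", h1)
store.link("backup/summer-2014", h1) # two names, one blob, no hot linking
```

Because `get` re-hashes whatever it received, it doesn't matter which peer served the bytes: tampered content simply fails the check.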

Instead of a file containing data, consider names merely human-readable pointers into a distributed data repository. For dynamism and updates to work, simply update the named link's source data infohash. This way multiple sites can use the same data under different names (no hot linking exists), and they can point to different points in a resource's timeline. For better deduplication, and to facilitate chat / status features, some payloads can contain the infohash of a resource they are a delta against. This way, changes to a large document or other resource can be delta compressed: Instead of downloading the whole asset again, users just get a diff and apply it to their cached copy. Periodic "squashing" or "rebasing" of the resource can keep a change set from becoming too lengthy.
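A toy sketch of the delta idea in Python. The "diff" here is append-only just to keep it short; a real system would carry a proper binary diff as the payload:

```python
import hashlib

blobs = {}   # infohash -> payload
names = {}   # human-readable name -> current infohash

def put(data):
    h = hashlib.sha256(data).hexdigest()
    blobs[h] = data
    return h

def put_delta(base_hash, patch):
    # A delta payload carries the infohash of the blob it patches.
    return put(b"DELTA:" + base_hash.encode() + b":" + patch)

def resolve(h):
    data = blobs[h]
    if data.startswith(b"DELTA:"):
        _, base, patch = data.split(b":", 2)
        return resolve(base.decode()) + patch   # toy append-only "diff"
    return data

v1 = put(b"article draft")
names["blog/post-1"] = v1                        # publish
v2 = put_delta(v1, b" (now with corrections)")   # small delta, not a re-upload
names["blog/post-1"] = v2                        # update = repoint the name
```

Anyone holding the v1 blob in cache only needs to fetch the tiny delta payload to reconstruct v2.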

Unlike Git and other distributed version control systems, each individual asset can belong to multiple disparate histories. Optional per-site directories can have a time component. They can be more than a snapshot of a set of info-hashes mapped to names in a tree: Each name can have multiple info-hashes corresponding to a list of changes in time. Reverting a resource is simply adding a previous hashID to the top of the name's hash list. This way a user can rewind in time, and folks can create and share different views into the Distributed Hash Table File-system. Including a directory resource with a hypertext document can allow users to view the page with the newest assets they have available while newer assets are downloaded. Hypertext documents could then use the file system itself to provide multiple directory views, tagged for different device resolutions, paper vs eink vs screen, light vs dark, etc. CSS provides something similar, but why limit the feature to styles and images when it can be applied more generally to all content and their histories as well? Separate Form from Function; Separate Content from Presentation; Separate Trust from Provider; Separate Names from Locations; Separate Software from Instructions; Separate Space from Time... I mean, duh, right?
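The per-name timeline can be sketched like so (Python, with fake infohash strings as placeholders):

```python
class VersionedName:
    """A name maps to a list of infohashes through time;
    reverting just re-pushes an old hash, so history is never rewritten."""
    def __init__(self):
        self.history = []   # oldest first

    def update(self, infohash):
        self.history.append(infohash)

    def current(self):
        return self.history[-1]

    def revert_to(self, index):
        self.update(self.history[index])   # revert = new entry, not a rewrite

doc = VersionedName()
doc.update("infohash-v1")
doc.update("infohash-v2")
doc.revert_to(0)   # rewind: current() is v1 again, v2 stays in the timeline
```

Since the same infohash can sit in any number of these lists, one asset belongs to as many disparate histories as people care to publish.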

Data silos have always been at odds with the Internet's inherently decentralized nature. This peer to peer architecture is its greatest strength, and why it can withstand censorship or thermonuclear war. The Internet can route data around cities in seconds if they disappear from its mesh. There is no such thing as a client or server at the IP level; everyone is a peer. The stupid centralized hierarchical approach is full of bottlenecks, DOS vulnerability, and despotic spying. Getting your data from the nearest peer prevents upstream sources from tracking your browsing habits -- They don't need to track us, TV and print ads never did.

The biggest elephant in the room is silly pre-internet OS design that doesn't work with our multi-device lives. This will end soon too. The virtual machine OS and its cross platform applications will rule all hardware soon. People use hardware for the programs, not the OS. Instruction-sets are irrelevant; resistance is feudal, you will be ASCII-eliminated. Synchronizing and updating your data and upgrading your OS should NOT be a problem in this day and age. A distributed file system and cross platform OS can handle all of that for you. New OS image? It's just another file. The VM Operating Systems will compile the bytecode once at code-install and get both cross platform applications and native speeds, without additional program startup overhead. All you have to do is write a real OS instead of a dumb Kernel -- Protip: If your Kernel doesn't have a program bytecode compiler, you're doing it wrong. This will happen soon, I have foreseen it: That is how my "toy" operating systems work, it is how Google's NaCl apps work, and it would be a simple addition to Android. For a fast and dirty approach I could easily upgrade BSD's Unix kernel with LLVM's bytecode compiler. You should never distribute programs as binaries, only cross platform intermediary output. This enables per program selective library linking: Dynamic or static. It's such a braindead obvious feature that, if POSIX weren't a design-by-committee cluster fuck, you'd have had assemble-on-install bytecode formats as a replacement for ELF long ago.

I use a system similar to PGP fingerprinting for the update signatures and community trust graphs. Seed boxes can specify how much of a resource to save for backup / re-seeding. Families don't have to go through creepy 3rd parties like Facebook or G+ to share pictures, videos and comments with their friends and relatives. What if a device dies? Nothing of value is lost. We Separate File Names from Storage Locations, remember? The DHT-FS is built for redundancy. I can set how much redundancy I want for any view -- I have a backup view that I add resources and directory trees to, so that even if I delete all other references the data remains. All of your files, pictures, etc. should always be automatically backed up amongst your friends and family for as long as you live. Backups of images and videos happen immediately, quite naturally, primarily because all the grandparents go and view them as soon as "Summer Vacation 2014 [19 new]" appears in their notifications. RAID-G is amazingly fast; it has nothing better to do!

Folks can have a directory with private encrypted files. The normal remote view is just a directory named like "Joe Sixpack's data [private]", which stores a set of infohashes and files full of encrypted noise and an encrypted directory view. "Joe, I'm so sorry about the house fire. Come over any time if you need to use your private data before it's finished re-syncing." Joe gives his password to unlock the encrypted directory view and all the data is there, safe and backed up just like everyone else's data at everyone else's homes. PGP-like fingerprinting can be used for signing and encryption in the trust graph. Don't want to keep someone else's data anymore? Just unlink that folder name from all your views and its space will eventually get re-used.
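A hedged sketch of that private-view idea in Python. The hash-counter keystream below is a stand-in just to keep it stdlib-only; a real deployment would use a vetted AEAD cipher, and the file names are made up:

```python
import hashlib, os

def keystream(key, length):
    # Counter-mode keystream derived from a hash; illustrative only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(password, plaintext):
    """Turn a directory view into salt + noise that peers can hold for you."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return salt, ct

def unseal(password, salt, ct):
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

salt, noise = seal("joes-passphrase", b'{"vacation.jpg": "infohash-abc..."}')
# Peers store only salt + noise; the directory view unlocks from anywhere:
assert unseal("joes-passphrase", salt, noise) == b'{"vacation.jpg": "infohash-abc..."}'
```

To everyone else the payload is indistinguishable from noise; only the password re-derives the key, so the re-synced copy after the house fire unlocks just like the original.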

Remember the old joke about "how big of a disk do I need to download the Internet"? That's a stupid joke; The Internet Archive does just that, and in fact it saves multiple Internets through time. The whole web can be one giant archive. Not everyone has to store everything, but together we can have enough redundancy that no one need ever lose data again. Seed boxes can set who, what, and how deep to freeze. However, I am not the only one working on such systems, so I may not be first to market with my DHT-FS, but the distributed data revolution is as inescapable for any species as the discovery of nuclear power.

Your generation is the first one growing up with a world-wide information network. You have many growing pains ahead as culture must adjust to the advances afforded those who survive into the Age of Information. Your publishing models will be remade, your education systems will be transformed, your economies will scream as those not agile enough seek to prevent rapid progress, your governments will seek to control all the newfound power for themselves, but fear not, the changes will occur: Life itself is decentralized. A world wide distributed file system is what the Internet deserves, and it will happen sooner or later. It is the natural progression of technology. The "best effort" routing is an advantage you will eventually leverage adequately whether you want to or not. The speed limit of light will ensure this happens as you reach for the stars. You will have settlers off-world soon, and they will want to access all of Humanity's information too. Thus, this type of system is inevitable, for any space faring race.

The sooner you learn to separate space from time and implement the delay tolerant space Internet on Earth, the better; The DTN will force you to separate name from location, to leverage deduplication. I know it, NASA knows it, everyone knows this is the true answer, all except SEO gurus apparently.

Comment: Re:Softball (Score 1) 402

by VortexCortex (#46804959) Attached to: In a Hole, Golf Courses Experiment With 15-inch Holes

What might make golf more accessible is building smaller 9-hole courses heavy on par-threes with more forgiving hazards and flatter greens.

That's making the game easier, not more accessible. It'll still take a lot of skill, and honestly you're using the wrong tool for the job. This is America, so consider letting players sight holes in their cross hairs and launch their balls with air cannons. You could even have a course for cart-sized trebuchet builders for the mechanically inclined. That way, even quadriplegic folks could play. Mount a cannon to a cart or motorize a catapult and you can get the robotics folks involved. Robot Wars: Golf Edition. Now, that's accessible.

Know what would make golf even more accessible? Portable holes.

Comment: Get a dry erase marker and write on the screen. (Score 1) 167

by VortexCortex (#46800977) Attached to: Ask Slashdot: Professional Journaling/Notes Software?

Rsync your CherryTree file, or sync with whatever cloud storage solution you use, Google Drive, Microsoft NSAAS, whatever.

It's a bit limited for complex things, but it worked for some students I know, covering the majority of their note-keeping needs. I stopped using 3rd party solutions since I eat my own dogfood, and now have notes integrated into my distributed versioned whiteboard / issue tracker / build & deploy & test product. I have issue/note/image annotation plugins for coding with NetBeans, Eclipse, Visual Studio, Emacs and Vim -- which reminds me of a Vim plugin I just saw that you might find useful... if you can run a (home) server (and port forward around NAT), then install Wordpress on a LAMP stack (in a VM, because PHP exploits) -- I'm pretty sure Emacs has all that built in by default now: C-x M-c M-microblog.

I jest, it's just Org mode. Save your .org to your Git repo, and away you go.

Comment: Oh fuck the what? (Score 4, Insightful) 207

by VortexCortex (#46800745) Attached to: Cody Wilson Interview at Reason: Happiness Is a 3D Printed Gun

That's one hell of a strawman you've got there. I'm not an anarchist myself, but I'm not sure you've ever actually met many anarchists if that's what you think of them. Sounds like you've conflated anarchy with chaos -- that's just silly. There are many native peoples that live quite happily in anarchy. Self defense is an important aspect of anarchy. Note: The US Supreme Court has ruled that it is not the duty of the police to protect anyone. They can't help you or your loved ones until they have already been victimized. The founding fathers of the USA also believed in a well-armed militia. It is your duty to protect yourself, your loved ones, and your property -- just like it is under anarchy... So, really, making weaponry more available is a good thing. Accidental shootings are rare; far more kids die in bathtubs or crossing the road than from accidental shootings, to say nothing of riding in cars themselves. Folks are OK with people building custom bathrooms and cars... right? Criminals don't care about gun control laws anyway.

I use a custom 3D printing rig for my robotics projects, and this gun project is AMAZING. Who doesn't want sturdier robots? Now, here's something interesting: How many technological advances can you think of that were not quickly militarized? Electricity? Nope. Uhm, radio? Nope. Cars? No -- hell, even horses were militarized. Computers? Nope, code makers and breakers. Telescopes? Immediately found their way to the battlefield. Even our beloved RC cars, model airplanes and robots are becoming military drones. Did you know the US government reserves the right to option any patent for its exclusive secret (military) use? That's why patent applications are still secret even though first-to-file exists.

Making guns is human nature. We've been crafting weapons from unlikely materials for millions of years. Break this rock and tie it to that stick and you can make a spear! However, this 3D printed gun is more of a proof of concept, and it's important because guns involve coping with extreme heat and pressure. It's sort of like race cars: outside of boring entertainment or a very expensive hobby they're mostly pointless, except that many expensive impractical innovations from race cars do eventually make it into street cars for better safety, efficiency, speed, etc. I can hardly think of a better Olympics of 3D printing than gun making.

Also, "bits of plastic"... I can 3D print with metals using a simple welding rig. The resolution is shit and requires lots of polishing afterwards, but the results are OK considering it's a makeshift adaptation of a RepRap, and they will only get better. If we can improve the durability of 3D printing, then you might order things at your computer and pick them up from the local hardware store in the "printware" section. Perhaps they'd have some thing-of-the-week demo units to try out, printed while you wait, or delivered with your next pizza. Then we could drastically reduce our shipping infrastructure by producing products right in the stores, only shipping the raw materials to feed the printers. Other things, like cars, which you'd want certified MFGs to assemble, could even be customized on demand -- select a bigger cargo area, or a narrower body for tight spaces, get your logo crafted into the design.

Hell, we could even work our way up to custom designed 3D printed spacecraft; you'd have to bake the ceramic shields though. I've even made my own super capacitors by layering ceramic clay and aluminum foil and baking it in the kiln (vertically, with the edges folded closed); only the lower 1/3rd retained its metal and became a huge capacitor. My welding rods deposit too thickly, but better metal and ceramic 3D printing could one day yield things with built-in instant-charge inductive cells too. It's a ways off, esp. with entrenched market forces, but that's what refining 3D printing material science by making guns can lead us to.

If you're opposed to 3D printed guns, I would encourage you to NEVER drive a car. In fact, stay indoors at all times, and only eat health food, heart disease is one of the most dangerous things on the planet.... But fortunately we're working on 3D printed replacement hearts.

Comment: Don't. Be ridiculous. (Score 1) 207

by VortexCortex (#46800405) Attached to: Cody Wilson Interview at Reason: Happiness Is a 3D Printed Gun

I agree, but you don't even need a machine shop, lathe, etc. to build a gun. You can build a pretty sturdy zip gun with some pipe and fittings from your local hardware store. They even sell .22 caliber rounds for driving in nails, so you can build the whole gun, projectiles and all, right there in the store. Get some real bullets at Walmart later. Look, we're all "nerds" here; home made guns should be part of any contingency scenario for your zombie plan. Help a geek out.

Makeshift "zip" guns are even sturdier than a 3D printed gun is right now. Eventually 3D printed materials will be even better than what subtractive technologies produce, since we can influence fine structural detail. But right now, 3D printed guns are WAY down the list of essential zombie preparedness kit items (it's like a hurricane or earthquake kit, but with more shotguns).

If you're in the US, today is a great day for a zombie attack. There are folks gathering away from their homes in large numbers, running around collecting and eating food off the ground. Even if you don't get visited by the Easter Zombunny, today is a great opportunity to teach kids foraging skills. Remember, in the event of an outbreak: Always hunt responsibly, steer clear of tasty traffic bottlenecks, and she is not your mother-in-law anymore.

Comment: Re:Quatity is not quality (Score 4, Interesting) 374

by VortexCortex (#46799523) Attached to: OpenSSL Cleanup: Hundreds of Commits In a Week

I cant talk for C, but in Java

Haha. Oh man. Java is a VM. Do you check for "dangerous constructs" like the Java VM Just-In-Time compiling data into machine code at runtime, marking that data executable, and then running it? Because that's how it operates. Even just having that capability makes Java less secure: you don't even have to get exploit data marked executable and run, you just have to get it into memory, then jump to the location of the VM code that does it for you, with your registers set right. Do any of your Java code checking tools run against the entire API kitchen sink of that massive attack surface you bring with every Java program, called the JRE? Do they prevent users from having tens of old deprecated Java runtimes installed at 100MB a pop, since the upgrade feature just left them there and thus still able to be targeted explicitly by malicious code? No? Didn't think so.

Don't get me wrong, I get what you're saying: Java code can be secure, but you have to run tests against the VM and API code you'll be using too. Java based checking tools produce programs that are just as vulnerable as C code, and demonstrably more so when you factor in the exploit area of their operating environment. Put it this way: The C tools (Valgrind) already told us that the memory manager was doing weird shit -- it was expected weird shit. No dangerous construct warning would have caught heartbleed; it's a range check error coupled with the fact that they were using custom memory management. The mem-check warnings are there, but they have been explicitly ignored. It would be like the check engine light coming on, but you know the oil pressure is fine and just the sensor is bad... so no matter how bright a big red warning light you install, it can't help you anymore; it's meaningless. Actually, it's a bit worse than that: it would be like someone knew your check engine lights were on because of some kludge they added for SPEED, so they knew they could get away with pouring gasoline in your radiator because you wouldn't notice anything wrong until it overheated and blew up -- AND you asked them about the check engine light a few times over the past two years, but they just shrugged and said, "Don't worry about it, I haven't looked under the hood lately, but here's a bit of electrical tape if the light annoys you."

I write provably secure code all the time in C, ASM (drivers mostly), even my own languages. CPUs are finite state machines, and program functions have finite state as well. It's fully possible to write and test code for security that performs as it should for every possible input. For bigger word-size CPUs, instead of testing every input, one just needs to test a sufficiently large number of them to exercise all the bit ranges and edge cases. As you've noted, automation is key. If you want to write secure code you have to think like a cracker. My build scripts automatically generate missing unit test and fuzz testing stubs based on the function signatures. Input fuzzing tests are what a security researching hacker or bug exploiting cracker will use first on any piece of code to test for potential weakness, so if you're not using these tests your code shouldn't touch crypto or security products; it simply hasn't been tested. Using a bit of doc-comments to add additional semantics, I can auto generate the tests for ranges, and I don't commit code to the shared repos that doesn't have 100% test coverage in my profiler. If OpenSSL had been using even just a basic code coverage tool to ensure each branch of #ifdef was compilable, they'd have caught this heartbleed bug. I recompiled OpenSSL without the heartbeat option as soon as my news crawler AI caught wind of it.
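The fuzz-stub idea, boiled down to a toy in Python. The function names and oracle here are mine, not from any real harness; the point is that even blind input fuzzing plus a trivial invariant catches a heartbleed-shaped length bug:

```python
import random

def fuzz(func, oracle, trials=500, seed=1):
    """Throw edge-case and random lengths at func; report inputs the oracle rejects."""
    rng = random.Random(seed)
    edge = [0, 1, -1, 2**16 - 1, 2**31 - 1]   # the bit-range / edge cases
    failures = []
    for _ in range(trials):
        a = rng.choice(edge) if rng.random() < 0.5 else rng.randrange(0, 2**16)
        b = rng.choice(edge) if rng.random() < 0.5 else rng.randrange(0, 2**16)
        try:
            if not oracle(a, b, func(a, b)):
                failures.append((a, b))
        except Exception:
            pass   # raising on bad input is acceptable behavior
    return failures

# Heartbleed in miniature: echo `claimed` bytes out of a buffer holding `have` bytes.
def heartbeat_reply_len(claimed, have):
    return claimed             # bug: never checks claimed against have

def oracle(claimed, have, reply):
    return reply <= max(have, 0)   # must never echo more than we actually hold

assert fuzz(heartbeat_reply_len, oracle)   # fuzzing flags the over-read immediately
```

Swap in a version that clamps the reply to the buffer size and the failure list goes empty, which is exactly the regression signal a commit hook can gate on.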

Code review, chode review. These dumbasses aren't using the basic ESSENTIAL testing methodology you'd use for ANY even mildly secure product: Code coverage + memory checking is the bare minimum for anything that has to do with "credentials". They apparently also have no fucking idea how OS memory managers operate (they do memory re-use; that's why we even need Valgrind, because some accesses to freed memory won't SEGFAULT since your program is reusing recently deallocated memory on the next malloc). That's why I find the whole "optimization on by default for SPEED" thing suspicious, since it was only needed on a few platforms; especially suspicious considering they hadn't tested the other side of the #ifdef for years, AND claimed they couldn't disable the custom memory management expressly because compiling against the equivalent standard mem manager code hadn't been tested in years. No, that's NOT acceptable! If your mem-manager code hasn't been tested versus the default stdlib's malloc then it just plain shouldn't be in public use -- especially not in an industry standard security related product. How the fuck else do you compare code branches to test your memory manager?! How the fuck else do you even know it's better than malloc() and free()?! The fact of the matter is that the OpenSSL codebase has so many commits now because it is trash. That's what happens when you set out to make a big-int math lib and it evolves into a security product. No one should expect that shit to be even slightly secure unless it's re-engineered with a proper dev toolchain and unit test / input fuzzing generation framework.

Honestly, if I were trying to break OpenSSL I might accept a patch on New Year's when no one else was really paying attention, one that exposed the huge memory reuse vulnerability red flag I'd been waving people away from testing for years... Sounds like a bunch of "plausible deniability" to me. Just like when OpenSSL's entropy was REMOVED from its random number generator back in '08. Yes, the OpenSSL maintainers went dark and silently moved to another issue tracker -- a huge problem for an open source security product (getting a sense of their typical dumbassery now?) -- but the Debian maintainers that allowed that patch to their OpenSSL without at least running it past upstream should have been sent out to pasture too. You DO NOT FUCK WITH a security product's RNG on a whim! That shit is hard to get secure even if you're not being evil. However, if the OpenSSL devs had a simple entropy test in their RNG test harness, no one would have been able to make the mistake that made all the SSL keys bogus last time.
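Such an entropy tripwire is only a few lines. A rough Python sketch; the 7.5 bits-per-byte threshold is an arbitrary illustration, not a substitute for a proper statistical test suite:

```python
import collections, math, os

def shannon_entropy(data):
    """Estimated bits per byte from byte frequencies (8.0 is ideal)."""
    counts = collections.Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def rng_sanity_check(rng_bytes, min_bits_per_byte=7.5):
    # Crude tripwire, not a proof of randomness, but it catches catastrophic
    # failures like a keyspace collapsed to a few thousand seeds.
    if shannon_entropy(rng_bytes) < min_bits_per_byte:
        raise AssertionError("RNG output looks non-random; refuse to ship")

rng_sanity_check(os.urandom(65536))   # a healthy RNG sails through
```

A check this dumb would still have screamed bloody murder at grossly degenerate RNG output before a release shipped.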

Sorry. OpenSSL is dead to me now. I'm a rationalist. I don't have to think in binary terms. I can entertain several possibilities weighted at different degrees of certainty. I'm not 100% certain of anything, but what I do know to a high degree of certainty (enough that it's not worth risking use of it) is that OpenSSL and everything else is being targeted for anti-sec by all the world's 3 letter agencies, and the fuckers that screwed up HUGE, TWICE, are going to stay in charge? These maintainers should step the fuck down. I wouldn't even trust them to keep backdoors out of a minesweeper clone. It's a good thing the OpenBSD devs are ripping out the bullshit, but without a complete rewrite and change of guards I'm not going to risk it. It's a waste of time, IMO. Just use something else or start from scratch -- that'll take the same time as stripping it down to the core components, testing the hell out of them, and refactoring everything with the proper tests any security product should have. The scary thing is that OpenSSL is in a better state than most of the proprietary SSL code I've seen in the wild.

If you want something done right: Do it yourself. You can't trust the court jester with the keys to your kingdom. These bigint math, hashing, and cipher algorithms ARE NOT HARD to implement. For the things I need secure I just use my own implementations (and ciphers). Heartbleed didn't mean shit to me. And for the rest of you folks pretending you give a damn about security: Stop complaining! You look like idiots. I don't really know what all the fuss is about. Oooh! All the keys are exposed. So fucking what? Any security researcher worth their salt knows that the CA / PKI for TLS is completely and utterly broken anyway. It's a giant security theater. It's like you're flipping out because someone's shoes didn't get checked in line at the airport, while some unknown, unchecked kid is stowing away in the wheel well of your airplane, freezing to death -- good thing he wasn't a terrorist!

Any CA can create a cert for ANY domain, and if you go view certs (FF -> Preferences -> Advanced -> Certificates -> View), then you'll see known bad actors explicitly trusted as roots right now in your fucking browser! You have the Hong Kong Post Office as a trusted root and you give a fuck about heartbleed?! Russia, China, Iran, etc., countries yours is probably at cyber-war with right now, are also trusted? ANY ONE OF THEM compromises ANY server between the endpoints and they can MITM the SSL connection; you'll get a big green secure bar and everything. You could use the CA system with 100% self signed certs for Intranet security, but for anything else it's fucked: No one checks the cert chain manually, and if they did, how do you know your email provider didn't switch CAs? YOU DON'T, and any attempt to find out is susceptible to the same MITM.

With national security letters and gag orders flying around, no one can trust any CA in the world, and you have to trust ALL of them to not be compromised for the SSL infrastructure to be secure -- the likelihood of the CA system being uncompromised is actually below 0%; we have governments admitting to wholesale spying all over the world and boasting about their buildings full of people whose job it is to make sure the CA system, OpenSSL, etc. are not secure. In my estimate I'd put the CA system at -9001% secure, since any competent security researcher would NEVER have built the system this way: It's a collection of single points of failure so fragile that even one failure could compromise everything. Remember DigiNotar? This CA system shit was built to be insecure by design. It was broken before it was even implemented, IMO. It's not like PGP trust graphs don't exist. It's not like we never revised the system either, so that's no excuse. Too bad Convergence breaks every other build of FF. Heartbleed is overblown because ALL YOUR SSL KEYS were bogus the moment you created them, and so are your new ones.

Heartbleed doesn't affect me at all. I operate with the understanding that everything online, even data in an SSL connection, is equivalent to writing it on the back of a post card.

Look, HTTP-Auth exists; modify it slightly so that it pops up a box on a secure connection attempt BEFORE DISPLAYING ANY PAGE (typing a password into a web form is already game over, fools). Server sends nonce-0 and a server GUID (or just use the domain); client sends nonce-1 and a UserID, and negotiates the stream cipher to use. Use HMAC to derive proof of knowledge: HMAC( passphrase, User ID + Server ID + nonce-0 + nonce-1 ) = proof of knowledge. Done. Do not exchange the proof. Just key your chosen stream cipher with it at both ends of the connection and begin communicating over an encrypted tunnel that no MITM can get. I use the HMAC with a key-stretching hash of the inputs to make brute forcing take a bit longer; the ends can specify min and max iterations, and even thousands of iterations is faster than TLS key exchange. If the user has a password saved then don't let an unsecured connection happen, simple. This protocol is what my custom VPN uses.

We have logins at the places we need security anyway. Yes, someone could intercept account creation, but at least with the pre-shared key method that's a small window -- the only time you need PKI is to exchange the passphrase (or HASH( APP_salt + master_password + serverID ) in my case). Fail to capture account creation and no more MITM. You don't even need a CA system since the window is so small. At least with pre-shared secrets you have the CHANCE to exchange the secret out of band: Go to your bank physically, hand the PW to someone in person. The CA system ensures that every connection can be MITM'd regardless of how you exchange the password. Any security researcher who EVER thought that the PKI hierarchy offered any security should not be trusted. Think about it: It just inserts a CA trust as a potential MITM for every mother fucking connection!
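The handshake described above can be sketched in Python, using PBKDF2 as the key-stretching HMAC; the IDs, iteration count, and passphrase are placeholders, not any real VPN's parameters:

```python
import hashlib, os

def proof_of_knowledge(passphrase, user_id, server_id, nonce0, nonce1,
                       iterations=10_000):
    # Key-stretched MAC over both identities and both public nonces.
    # The result is never sent on the wire; it keys a stream cipher at each end.
    material = user_id + server_id + nonce0 + nonce1
    return hashlib.pbkdf2_hmac("sha256", passphrase, material, iterations)

nonce0, nonce1 = os.urandom(16), os.urandom(16)     # exchanged in the clear
server_key = proof_of_knowledge(b"hunter2", b"joe", b"bank.example", nonce0, nonce1)
client_key = proof_of_knowledge(b"hunter2", b"joe", b"bank.example", nonce0, nonce1)
assert server_key == client_key    # shared tunnel key, no key exchange to intercept

# A MITM who saw only the nonces derives garbage without the passphrase:
mitm_key = proof_of_knowledge(b"guess", b"joe", b"bank.example", nonce0, nonce1)
assert mitm_key != server_key
```

Since only the nonces cross the wire and both ends already hold the passphrase, there is nothing for an intermediary certificate authority to vouch for, which is the whole point of the rant.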

Heartbleed woes are ridiculous. It's like you're sitting there on the side of the road with your car broken down -- transmission stripped, engine thrown a rod, on fire. You're OK with that, but then a passing truck dings your windshield and you flip right out. That pebble you're losing your shit over is heartbleed.

Comment: Re:Guns are not contraband (Score 1) 155

by VortexCortex (#46798283) Attached to: New 'Google' For the Dark Web Makes Buying Dope and Guns Easy

In the United States you have a right, and a duty to train and learn how to use firearms effectively.

Well, if D&D taught me anything, it's that throwing all your experience into one specialization is folly. Civilian firearms are literally kid's play. I looked at the export control list, then became a cryptographer.

Comment: Re:I'm not worried about poor students (Score 4, Insightful) 389

by VortexCortex (#46798097) Attached to: Ask Slashdot: Hungry Students, How Common?

Getting to the point? We're there. We passed that threshold a while ago.

Correct. However, what many fail to realize is that in the 70's we didn't need to pay the educational extortion racket for permission to get work. The computing explosion was exploited to force the majority of the populace to seek degrees, but elementary school kids now have mastery of the required technologies. The tools are more high-tech, yet the interface is simpler than ever -- certainly something that could be picked up through on-the-job training.

The folks bitching about not being able to afford degrees are fools just now feeling the effects of an education bubble about to burst. The very tech that created the education bubble has brought advances that made degrees obsolete. You can always tell a bubble by the final pump and dump: ramped-up attempts to cash in on an overly optimistic valuation. You are now aware that degree mills exist...

The requirement for college accreditation has always been a method of discrimination against the poor, who would otherwise self-educate. More stringent degree requirements are a means by which corporations can drive down wages and get more government-approved H1B visas and outsourcing. In reality, requiring employees to have passed final exams is foolish, since it doesn't actually prove they know anything at all -- that's why your boss is likely a moron. Entrance exams would instead suffice to prove applicants had the required knowledge and skills, without requiring that they be saddled with debt by the educational gatekeepers of employment -- it doesn't matter how you learned what you know. Promoting to management from within cuts costs, improves management's ability to predict and to avoid placing unrealistic expectations on the workers, and gives upward mobility to aging experienced workers instead of considering them dead at 40 (family-raising age).

We're already on our way of getting to the point where you cannot recover your college fees during the rest of your working years.

Negative: debt levels have long since passed that point, and owing a debt to the career you enter has always been unacceptable in the first place. College as anything more than elective learning is just shifting around the Company Store by leveraging "intellectual property." We need college degrees less now than in the 70's. ::POP::

Comment: Ah yes, planet "nothing else", a fool's paradise. (Score 4, Interesting) 70

There is nothing else the planet. Should be working on. Except stopping these.

Yes there is. Self sustaining off-world colonies AND asteroid deflection technologies go hand in hand to help fight extinction -- which should be priority #1 for any truly sentient race.

Clearly asteroids are a very real threat, and I black-hole heartedly agree with the notion that Earth's space agencies are not giving them the level of public concern these threats should have: humans are currently blind as moles to space. Any statement to the contrary is merely shrouding the issue in the Emperor's New Clothes. Earth's telescopes can study very small parts of space in some detail, but do not have the coverage required to make the dismissive claims that NASA and other agencies do about asteroid impact likelihood -- note that they frequently engage in panic mitigation. Remember that asteroid transit NASA hyped up, while another asteroid whipped by completely unexpectedly, closer than your moon, too late to do anything about? Remember Chelyabinsk? That airburst released 20 to 30 times the energy of Hiroshima's nuclear bomb, and it didn't even strike the ground. What kind of wake-up call is it going to take?! You'd probably just get more complacent even if an overly emotional alien commander committed career suicide in the desert to bring your leaders the message that Earth was surely doomed without a massive protective space presence -- if such a thing ever occurred, that is.

Seriously, the space agencies are essentially lying by omission to the public by not pointing out the HUGE error bars in their asteroid risk estimates. I mean, Eris, a dwarf planet, was only discovered in 2005! Eris is about 27% more massive than Pluto, and its highly elliptical orbit swings it from nearly 100 AU all the way in to roughly Pluto's average distance from the Sun. Eris is essentially why your scientists don't call Pluto a planet anymore. They deemed it better to demote Pluto than admit you couldn't see a whole planet sitting right in your backyard... And NASA expects you to believe their overly optimistic estimates about far smaller and harder-to-spot civilization-ending asteroids? Eventually your governments won't have the luxury of pissing away funding via scaremongering up war-pork while ignoring the actual threats you face, like a bunch of bratty rich kids.

Asteroids are only one threat, and one that we could mitigate relatively easily given advance notice of their trajectories. However, coronal mass ejections, gamma-ray bursts, supervolcanoes, magnetosphere instability, etc. are all also severe threats that humanity can't mitigate with telescopes and a game of asteroid billiards alone -- though fast-acting manipulation of the gravitational matrix via strategic placement of asteroids could help with CMEs or gamma bursts too, once you had a sufficient armament of even primitive orbiting projectiles. The irregularity in your magnetosphere should be particularly distressing, because the field is over 500,000 years overdue to falter and rebuild as the poles flip (according to reconstructions of your geo-magnetic strata) -- it could go at any time! Given the current very abnormal mag-field behavior, you have no idea whether it will spring right back up nice and organized or leave you vulnerable to cosmic rays and solar flares for a few decades or centuries.

You should be grateful that the vulnerable periods of mag-pole flops halted as soon as humanity began showing some signs of intelligence -- even if this is absolutely only a mere coincidence. Mastery of energy threats will remain far beyond your technological grasp for the foreseeable future, but your species can mitigate such threats of extinction by self sustaining off-world colonization efforts! In addition to getting some of your eggs out of this one basket, the technology to survive without a magnetosphere on the Moon and Mars could be used to save the world here on Earth. In the event of a worst case scenario, humans could then repopulate Earth all by themselves after the dust settles from a mass extinction event. It's nearly unfathomable that anyone could sit comfortably in their gravity well thinking theirs may be the only spark of intelligent life in the universe while considering prioritizing anything above extinction prevention. If ancient myths about post-death paradise can invoke enough apathy that you would risk letting the only known spark of life go out, then yours is not a sentient species. Yes, you have all the space-time in the world, but those days are certainly numbered!

Those averse to human exploration of space now are not self-aware and sentient beings. In fact, were I an alien overseer -- and I am most certainly not admitting that I am -- then based on the lack of exploration beyond your magnetosphere over the past 40 years, I would recommend we cut our losses and take your species off the endangered sentience list. I imagine -- as a purely hypothetical speculation -- that if humanity did owe an advanced alien race one hell of a tab, and showed no indication of ability to repay it for the infinite future, one of them might risk violating the technological contamination statutes and propagandize the suggestion that you get your asses to Mars and colonize it as soon as humanly possible -- which would have been about 67 years ago, if you hadn't wasted so much time murdering yourselves. Even if exposing a clear and troubling picture of humanity's place in the universe were an overt violation of some alien version of your fictional prime directive, it's not like one wouldn't seriously need a permanent vacation after only a few decades of witnessing humanity's failure after mind-blowing failure to commit to ANYTHING resembling progress as a space-faring race!

Perhaps one would rethink their benefit package at the last second, and bury their contemptuous assessment in a reply to an AC.

Passwords are implemented as a result of insecurity.