Comment synology + backblaze b2 (Score 1) 135

If you're really into tinkering with stuff, sure, you can build your own and you'll have fun doing it. But most people don't need ridiculous-spec hardware at home: you're not running data-center storage loads, and your network is likely slower than anything the NAS can do internally anyway.

I'm not really into tinkering at that level anymore, so I just use a Synology NAS that backs up to Backblaze B2. It's all automated and works great. I run a small VM on it to collect logs and handle a couple of other minor tasks, it has Surveillance Station I can use for cameras, and so on. It's been a great way to consolidate a few utility devices into one central box that has vendor support if I need it. The offsite backup works perfectly and is very fast over residential gigabit fiber.

The main downside I've found is that replacing disks to grow a volume can be slow, depending on how full the NAS is and how large the disks being replaced are. If you plan to swap smaller disks for larger ones, budget about a day per disk, with only about five minutes of actual work each day to hot-swap them. But this is a pretty infrequent activity for most people.

Prior to that I had used a Drobo (RIP, and for fairly good reason) and various cobbled-together setups. Synology works better and has been more reliable than either for me.

Comment you have to do this thinking now because: (Score 1) 45

1) People are storing data now that needs to stay protected past the point when "quantum" hits, so quantum-safe algorithms are needed even now (attackers can harvest ciphertext today and decrypt it later).
2) Large-scale systems with lots of parties take forever to change. You have to get everyone to agree that something needs to be done, then get them to agree on what to do, then get them to actually do it. Barring a worldwide disaster/alien attack/etc., this just won't happen in a matter of weeks, or months, or in some cases even years. It can take decades to move industries off unsafe algorithms even when you can demonstrate an actual danger. If something is going to be a problem in those industries in 8 years and you aren't actively trying to solve it now, you are not going to fix it in time.

Comment Re:I admit it, I don't have a clue (Score 5, Informative) 20

https://www.iana.org/dnssec/ce...

Here is a link. The ceremonies exist to perform any cryptographic operation that requires the root key (for DNSSEC, the root zone key-signing key). To use such a key, you typically need a number of people called "key custodians," each with independent physical access to one component of the key, usually stored on a smart card or other secure token. There is an overall pool of custodians, and a quorum of them must be present for a given operation: six of ten, three of seven, etc.

Each custodian has to retrieve their fragment of the key (their assigned device), which is usually kept in a safe only they can access. Then they all need to be in the same room, usually a SCIF-like facility (think a bank vault with a data center inside it). The process they run asks for their components individually, and once the required number have been entered, the system reassembles the master key and performs whatever operation is needed.
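The m-of-n reassembly step can be sketched with Shamir's secret sharing, the standard construction behind this kind of quorum. This is a toy sketch only: real ceremonies use HSMs and smart cards, and the prime, share counts, and function names here are illustrative, not anything IANA actually runs.

```javascript
// Toy Shamir secret sharing over GF(p): split a secret into n shares
// so that any k of them reconstruct it, but k-1 reveal nothing.
const P = 2n ** 127n - 1n; // a Mersenne prime; the field choice is illustrative

const mod = (a) => ((a % P) + P) % P;

// Modular inverse via the extended Euclidean algorithm (works because P is prime).
function inv(a) {
  let [r0, r1] = [mod(a), P], [s0, s1] = [1n, 0n];
  while (r1 !== 0n) {
    const q = r0 / r1;
    [r0, r1] = [r1, r0 - q * r1];
    [s0, s1] = [s1, s0 - q * s1];
  }
  return mod(s0);
}

// Split `secret` into n shares with quorum k: evaluate a random
// degree-(k-1) polynomial (constant term = secret) at x = 1..n.
function split(secret, n, k) {
  const coeffs = [mod(secret)];
  for (let i = 1; i < k; i++) {
    // NOT cryptographically secure randomness -- acceptable for a toy only.
    coeffs.push(BigInt(Math.floor(Math.random() * 1e9)));
  }
  return Array.from({ length: n }, (_, i) => {
    const x = BigInt(i + 1);
    let y = 0n, xp = 1n;
    for (const c of coeffs) { y = mod(y + c * xp); xp = mod(xp * x); }
    return [x, y]; // one custodian's share
  });
}

// Reconstruct the secret from any k shares: Lagrange interpolation at x = 0.
function reconstruct(shares) {
  let secret = 0n;
  for (const [xi, yi] of shares) {
    let num = 1n, den = 1n;
    for (const [xj] of shares) {
      if (xj === xi) continue;
      num = mod(num * mod(-xj));
      den = mod(den * (xi - xj));
    }
    secret = mod(secret + yi * num * inv(den));
  }
  return secret;
}
```

Any three of, say, five custodians can bring their (x, y) pairs together to recover the key; two pairs alone determine nothing about the constant term.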

The process is designed so that fraud is very difficult and cannot happen undetected. All the systems and physical access along the way are typically monitored, controlled with biometrics and other secure mechanisms, and easily auditable. Any activity requires a quorum of people to deliberately agree to it, so you can't just get one guy to go do something bad.

It is similar to nuclear missile launch procedures, the root of a certificate authority, the root of a financial processor's crypto scheme, and so on.

In this case, it sounds like something broke down and they can't get into a safe or some other secure location to retrieve key components. These systems are usually designed to fail secure except where life safety is involved (i.e., you can get out if there's a fire; it just creates a huge audit nightmare).

Comment What about capital expense? (Score 2) 202

You're looking at one aspect of the budget. Non-labor operating expense is usually things like consulting firms, "cloud services," advertising, and training. Capital expense is where you typically book servers, enterprise software, storage, and the like. So this could be a company that spends a ton on marketing, or just one that spends more on external ad buys and focus studies than on sending IT staff to training and outsourcing business apps. Without the total picture, it's hard to say what they really invest in.

Comment Check their contracts etc. (Score 1) 238

You say that you are "connected to" the network but you don't say what this relationship actually is. If you are hosted by the hospital (i.e. actually part of their network), then they may have an information security department who is checking all the hosts that are on their network. This may or may not be part of the contract, either as a service provided or something that is required by the contract or hosting arrangement.

If you are not actually part of their network or hosted by them, there may still be contract language allowing them to run this sort of penetration testing against partner companies. Accepting that as a contract term isn't the best idea, but I have seen it requested before, and it may have slipped in without anyone noticing.

I would say that whoever handles the arrangement with the hospital should probably talk with their counterpart on the hospital's side about this and learn more about why it is happening and what is done with the information.

With respect to the various posts that have been (or will be) made about HIPAA: it's entirely possible, and desirable, to have a proactive information security policy that still complies with regulations. Proactive penetration testing is not prohibited.

Comment Re:That's what you get (Score 3, Informative) 60

That's not exactly the point. Sure, if a switch is sparking, then it is broken. The point of this gear is that it has been built such that if it breaks, it won't be able to emit dangerous sparks that might do something like cause an explosion in the presence of a buildup of gas or whatever. It still has to be replaced, just like the non-hardened switch, but it is less risky to deploy in an environment where such hazards might be present.

Comment Simple (Score 3, Interesting) 142

Microsoft supported it; Google opposed it. What more proof do we need that this act is evil? Probably none, and even if some, then not much.

Nevertheless, the articles linked in this story, even if not bad in content, may be quite hard to follow for anyone who hasn't formed an opinion on the matter yet. You can find much more information in the Wikipedia article on the Leahy-Smith America Invents Act, and even more in the articles linked from its references. I strongly recommend reading it all, because otherwise we risk drawing uneducated conclusions from aspects of this story that seem obvious but are not actually obvious to anyone educated in intellectual property law. Some of the implications of the act are rather scary, so we really need to take the time to research the subject fully. And unlike Redmondmag, the so-called "independent voice of the Microsoft IT community," Wikipedia is actually worth reading.

Comment Performance (Score 1) 329

The main point is performance. Ryan Dahl wanted to make it easy to write fast, scalable servers. We've known for years that threads don't scale the way event loops do (see the memory-consumption chart comparing Apache and nginx). Of course, a highly concurrent evented server can't use blocking system calls (which were a big mistake to begin with, in my opinion - they are the only reason threads ever needed to be exposed at the application level for concurrency).

OK, so we want a portable, high-performance, event-based, async-I/O, highly concurrent server. The obvious way to write one in an OS-independent way was C, using libraries like libev or libevent for the event loop and libeio for non-blocking I/O. The results are great, but it's not easy: C has no lambdas, anonymous functions, closures, or higher-order functions in a real sense, all of which make writing event handlers much easier.

So Ryan went looking for a higher-level language and found V8, the JavaScript virtual machine Google wrote for Chrome. JavaScript has anonymous functions and closures, and V8 is fast. Better yet, browser JavaScript never uses blocking calls anyway, so people are already familiar with asynchronous I/O, events, callbacks, closures, futures, and promises. Hell, you can even use Y combinators in JavaScript if you know your craft. Now, if only JavaScript had lazy evaluation and proper tail-call optimization - maybe some day.

Watch some of Ryan Dahl's talks if you're interested - and after 25 years in the field, you should be. Oh, and Node doesn't have anything to do with the browser beyond the V8 origins; it's all server-side. See the Wikipedia article on Node for more info and code examples. I'm glad people who have been programming professionally for so many years are still willing to broaden their horizons - as I've written in the past, that's unfortunately not a universal property of programmers. Have fun with the new tools.
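To make "evented" concrete, here is a toy sketch of the pattern in JavaScript. The `EventLoop` class and `fakeRead` helper are invented for illustration; they are not Node APIs (Node's real loop is native code under the hood), but they show why closures make this style pleasant.

```javascript
// Toy single-threaded event loop: callbacks queue up and run in order,
// so no handler ever blocks another. This is the model Node is built on.
class EventLoop {
  constructor() { this.queue = []; }
  defer(fn) { this.queue.push(fn); }                        // schedule a callback
  run() { while (this.queue.length) this.queue.shift()(); } // drain the queue
}

const loop = new EventLoop();

// Simulated non-blocking read: returns immediately, delivers data later
// via a callback instead of making the caller wait.
function fakeRead(file, callback) {
  loop.defer(() => callback(null, `contents of ${file}`));
}

const results = [];
// The callbacks are closures capturing `results` -- exactly the language
// feature C lacks that makes hand-written event handlers painful there.
fakeRead("a.txt", (err, data) => results.push(data));
fakeRead("b.txt", (err, data) => results.push(data));
results.push("scheduled both reads"); // runs before either callback fires

loop.run();
```

Note the ordering: the synchronous line runs first, and both "reads" complete afterward when the loop drains, without any thread ever blocking.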

Comment Click (Score 1) 329

Out of curiosity I looked at your link to Node. Then at the explanation about what the project is. It fits in half a line: "evented I/O for v8 javascript" and I have no idea what that means, even after 25 years of pro programming.

Actually it says:

evented I/O for v8 javascript - Read more
http://nodejs.org/

Surely clicking one of those links would be faster than asking on Slashdot and waiting for an answer? The "Read more" link, not even half an inch from what you quoted, leads to a big "Resources for Newcomers" section with links to the wiki and the home page.

JavaScript is, of course, the programming language. V8 is its high-performance implementation, developed by Google for Chrome. I/O means input/output, and "evented" means asynchronous I/O driven by an event loop. I'd think that after 25 years of professional programming you would know that, and if you don't, you should at least know how to follow the hyperlinks to find out.

Fairly typical of undocumented open-source projects, unfortunately.

Well if the only place where you look for documentation is the title of the project on GitHub then yes, it is fairly typical.

Comment Node (Score 2) 329

I suggest diving into Node. It is written in a very competent way; it's fast, small, efficient, and nicely documented; it does I/O correctly, so there are no messy blocking calls or thread-synchronization madness; and it is young enough that the code base is still small enough for one person to understand. Thanks to npm, it is also very easy to write modules that are small and clean with minimal boilerplate, so it's not like writing Java. There is a lot of code still to be written, so you may find yourself writing and publishing your own useful modules pretty soon. Good luck!

Comment Nuclear Power + Genetic Modifications (Score -1, Flamebait) 90

Before anyone has a knee-jerk reaction and says that it is bad because it's about nuclear power and genetically modified life forms, let me summarise for you the most important result of this research in the most straightforward way possible:

nuclear energy + genetic engineering + nanoparticles = clean planet

Now, if those so-called environmentalists are really fighting for a cleaner planet and healthy energy, then they must support this technology. If they oppose it, that is clear proof their motivations are not as pure as they would have us believe. Anyone truly concerned about our environment must admit that there is no cleaner energy source than nuclear, and using genetically modified microbes to clean up nuclear waste is the last nail in the coffin of opposition to nuclear energy. I don't care about CO2, because that is what plants breathe, and quite frankly I'd prefer a slightly warmer climate, but I do care about pollution, and using clean (not necessarily renewable) energy sources is the answer to that problem.

This is an example of great research. I am proud that it was all done by a team of female researchers.
