Comment Re:links to NIST (Score 1) 134

First of all, you mention the TCP/IP checksum in the same breath as key exchanges, long PSKs, and money transfer. Which is it, simple message integrity or message authentication? If you need authentication, you shouldn't even be mentioning TCP/IP checksums. If all you need is integrity, then just append an MD5.

Assuming you need authentication: I don't have any opinion on Blake2, but you should use an HMAC instead of just appending the shared secret key and taking the hash. HMAC can be built on top of any hash function, including Blake2. You should also make your MACs at least 128 bits long, preferably 192 or 256; 64 bits is definitely brute-forceable these days. I wouldn't settle for anything shorter than 256 for a protocol that involves money.
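As a rough sketch of what that looks like (hypothetical key and message, truncation length per the advice above), Python's standard hmac module builds an HMAC out of any underlying hash:

```python
import hashlib
import hmac

shared_key = b"a long random pre-shared key"          # hypothetical PSK
message = b"transfer 100 units to account 12345"      # hypothetical payload

# HMAC-SHA-256 instead of hash(message || key): the HMAC construction avoids
# length-extension and related pitfalls of naive key-appending schemes.
tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# Truncate no shorter than 128 bits (16 bytes); keep the full 256 bits for
# anything involving money. Always compare in constant time on the receiver.
ok = hmac.compare_digest(tag, hmac.new(shared_key, message, hashlib.sha256).digest())
```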

Keep in mind that a basic message authentication code is only one piece of a secure protocol; there are many possible attacks that might slip through (e.g. replay attacks, MITM, timing attacks, ...). For example, you need to ensure that an attacker can neither reinject a message from the current session back into it (e.g. by using a sequence number in the signed payload), nor reinject a message from a different session (e.g. by ensuring that there is a random component to the shared secret), nor somehow guess the right MAC (e.g. by making sure that wrong-MAC responses leak no information, in either contents or timing). And of course there's always the host identification problem: preventing a simple MITM where the attacker performs a key exchange with both sides and re-signs/re-encrypts traffic both ways. Honestly, your first choice should be to use an existing well-tested protocol, such as TLS (you can build your own host verification rules on top of it; you don't have to use the "well-known" root CA list). Failing that, consult a real crypto expert.
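A minimal sketch of the replay-protection part of that advice (invented field layout, not a complete protocol): put a random per-session nonce and a monotonically increasing sequence number inside the data that gets MACed, so a captured message verifies in neither the same session nor a different one:

```python
import hashlib
import hmac
import os
import struct

session_nonce = os.urandom(16)   # random per-session value agreed during key exchange
sequence_number = 7              # incremented for every message sent in this session
payload = b"transfer 100 units to account 12345"

# The MAC covers nonce + sequence number + payload, so replaying the message
# later in this session or injecting it into another session yields a bad tag.
authenticated_data = session_nonce + struct.pack(">Q", sequence_number) + payload
tag = hmac.new(b"hypothetical shared key", authenticated_data, hashlib.sha256).digest()
```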

Comment Re:links to NIST (Score 5, Informative) 134

For password hashing, that is correct. However, cryptographic hash functions are not designed for such use (and yes, all the websites and services out there using plain hashes for passwords are Doing It Wrong, even if they are using a salt). You can build a good password hashing scheme out of a cryptographic hash function (for example, PBKDF2), but plain hash functions are not suitable (precisely because they are too fast).
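As a rough illustration (arbitrary iteration count, hypothetical password), Python's standard library exposes PBKDF2 built directly on top of an ordinary fast hash:

```python
import hashlib
import os

password = b"correct horse battery staple"   # hypothetical user password
salt = os.urandom(16)                         # unique random salt per user

# PBKDF2-HMAC-SHA-256: the underlying hash is fast, but iterating it a few
# hundred thousand times makes each password guess proportionally expensive.
derived_key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
```

Store the salt and iteration count alongside the derived key; the iteration count can simply be raised as hardware gets faster.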

Fast cryptographically secure hash functions are a Good Thing, so you can hash a given block of data (and thus compute its digest and e.g. verify its digital signature) as fast as possible. This speeds up things like GPG, SSL, Git*, good old sha1sum checksums, etc. If you then want to use such a hash function as the core of a password hashing scheme, you can compensate for the extra speed by simply increasing the number of iterations. Making a hash function slower is always easy.

*Git is pretty much stuck with SHA-1 for now, but future incompatible versions of the repo format could conceivably switch to a faster hash function if it made sense.

Comment Re:It's a very sad thing to admit, but (Score 4, Interesting) 260

Which means the Optimus solution isn't actually all that bad. I have the opposite viewpoint: I bought an Optimus laptop assuming the nvidia GPU wouldn't work, simply for the other specs and the Intel video. When it turned out that bumblebee worked fairly painlessly and I was able to use the nvidia to accelerate 3D while the Intel drove my displays, I was pleasantly surprised. The solution is a bit of a hack, but honestly, I don't really have anything bad to say about it. It's the best of both worlds: open Intel drivers, which are stable and support modern interfaces like XRandR 1.3 and KMS, driving the displays, and the clunky but fast proprietary nvidia driver sandboxed in its own background X server doing 3D acceleration only.

Comment Re:Sensationalist article stating the obvious (Score 2) 218

It is true that the law grants the copyright owner the right to restrict the creation of copies, but it can be reasonably argued that by posting the code on GitHub you've implicitly given consent to the mere creation of a copy (as would automatically happen if you view the code or download it). For most practical purposes, the limitation on distribution is what matters.

Comment Re:Will this chargers be "always on"? (Score 1) 82

How did you measure the power usage? If you used a cheap power meter that does not have accurate power factor measurement, then your measurements are completely useless. Idle switching power supplies have very low power usage, but a very low power factor, because they act as capacitive loads. This means that a naive current meter will measure all of that out-of-phase current and you'll end up with a grossly inflated power figure. A proper power meter measures both instantaneous voltage and current many times during each AC power cycle, and can therefore report both real power and apparent power. If you measure an idle switching wall wart with such a meter, you will see a low W (real power) figure and a high VA (apparent power) figure. Residential customers are usually charged for real power only.
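As a back-of-the-envelope illustration of the difference (synthetic samples, roughly what an idle capacitive wall wart might present): real power is the average of instantaneous v*i over whole cycles, apparent power is Vrms*Irms, and a naive meter effectively reports the latter:

```python
import math

N = 1000  # samples covering an integer number of 50 Hz mains cycles
volts = [325 * math.sin(2 * math.pi * 50 * t / N) for t in range(N)]
amps  = [0.02 * math.sin(2 * math.pi * 50 * t / N + math.radians(80)) for t in range(N)]
# current leads voltage by ~80 degrees, as for a mostly capacitive load

real_power = sum(v * i for v, i in zip(volts, amps)) / N          # watts
vrms = math.sqrt(sum(v * v for v in volts) / N)
irms = math.sqrt(sum(i * i for i in amps) / N)
apparent_power = vrms * irms                                      # volt-amperes
power_factor = real_power / apparent_power

print(real_power, apparent_power, power_factor)   # ~0.56 W vs ~3.25 VA, PF ~0.17
```

The utility bills residential customers on the ~0.56 W figure; a meter that just multiplies RMS current by line voltage shows something closer to the 3.25 VA one.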

Comment Re:Too much of a good thing? (Score 3, Interesting) 82

That isn't caused by overcharging, it's caused by the battery simply being at 100% charge. Li-ion batteries like to be stored at 40% charge, and degrade much faster at 100%.

The technical solution to this problem is a trivial firmware change to the charging controller so that it only charges the battery to 40%. However, I suspect nobody has done it because nobody has figured out how to get users to switch to a "40% maintenance charge mode" when always plugged in, without pissing them off when they unplug the device and discover that it's only 40% charged. The fundamental problem is being able to predict when the user will actually need to run on the battery, and only fully charge it immediately prior.
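As a rough sketch of what that trivial firmware change amounts to (invented names and thresholds; real charge controllers obviously don't run Python), the policy is just a different cutoff depending on whether the user has asked for a full charge:

```python
# Hypothetical charge-controller policy, illustration only.
MAINTENANCE_TARGET = 40    # percent: the storage level Li-ion prefers
FULL_TARGET = 100          # percent: what the user expects before going mobile

def charging_enabled(charge_percent, user_wants_full_charge):
    # Devices that live on the charger stop at ~40% to slow degradation;
    # only top up to 100% when the user signals an imminent unplug.
    target = FULL_TARGET if user_wants_full_charge else MAINTENANCE_TARGET
    return charge_percent < target
```

The hard part, as noted above, is knowing when to set user_wants_full_charge without annoying anyone.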

Comment Re:Word (Score 0) 586

It's successful in itself, but not always that successful for its users. One of the chronic problems of Arduino is that many people become mentally attached to it and exhibit extreme reluctance to ever move beyond it, learn how it really works, or (gasp) build a project that isn't an Arduino shield. When they find a problem that the Arduino isn't great at, they go to great lengths to solve it while keeping the Arduino (using e.g. more external logic, more than one Arduino, or, in extreme cases, throwing an FPGA on top of the Arduino and implementing a coprocessor on it that is more powerful than the Arduino itself).

Very often, these problems can be solved much more efficiently by building your own hardware from a bare microcontroller, which is actually a very easy task - the Arduino is little more than a breakout board. Alas, while some Arduino users do realize this (and use bare AVRs, both the ones on Arduinos and other models, or even other brands, to great effect), most do not. Now that Arduino is moving to ARM, this will get marginally worse - if people aren't taking the plunge and figuring out how to use a bare AVR (which is completely trivial to bring up on a breadboard), moving to a surface-mount part that has slightly more complex requirements is not going to make things better.

The same thing happens, to a different extent, on the software side. I've seen many projects that are coded as a humongous unreadable Arduino sketch, when by that point the authors really ought to have learned how to modularize their code (and format it properly, too). It doesn't help that the Arduino IDE isn't a particularly great text editor.

The Arduino is a great learning platform, but it does the "make it easy" experience so well that people are very afraid of moving beyond it, and this ends up creating an artificial learning curve further down the line that is hard to get past. If you're forced to experience bringing up a micro on a breadboard from day one, you quickly figure it out (it's really easy), but if you've been comfy in Arduino land for months or years, it feels a lot scarier than it really is. (What do you mean I have to provide my own regulated power supply? And the pins don't have labels! Where is the USB connector?) I don't even blame the authors of Arduino; for example, they do include code that can program a bare AVR using an Arduino as a programmer (with visual wiring diagrams even, IIRC), which is exactly the kind of thing you want to help people bootstrap themselves, but people still just aren't doing it.

Comment Re:CRT's (Score 5, Insightful) 358

This. I came here to say the same thing, but you already had. Every single modern graphics card is very efficient at scaling textures, and in fact, LCD scaling these days most often ends up happening on the GPU anyway. Don't touch my screen resolution. Ever. If the goal is to get better performance by rendering at a lower resolution, then render to a lower-resolution offscreen buffer and scale that up to the screen resolution.
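A minimal sketch of that approach using pygame (an arbitrary library choice to keep the example short; the same idea applies to any rendering API): draw the frame into a small offscreen surface and stretch it to the native-resolution window, instead of switching video modes:

```python
import pygame

pygame.init()
native = pygame.display.set_mode((1920, 1080))   # desktop resolution stays untouched
offscreen = pygame.Surface((960, 540))           # the game's internal render resolution

# Render a trivial frame into the low-resolution offscreen buffer.
offscreen.fill((30, 30, 30))
pygame.draw.circle(offscreen, (200, 80, 80), (480, 270), 60)

# Scale the offscreen buffer up to the full window and present it.
pygame.transform.scale(offscreen, native.get_size(), native)
pygame.display.flip()
```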

I wish Wine had a mode that did this for Windows games that expect to change the screen resolution and don't play well with Xinerama. These days I end up using Wine's "virtual desktop" mode with per-game settings and KDE's window override support to put it on the right display head and remove the borders, but it's a suboptimal, manual solution. The situation with Linux games is slightly better (they tend to be configurable to respect the current resolution and usually get the display head right), but they still don't do this kind of scaling.

Need inspiration? Do what video players (particularly mplayer) do. That is how fullscreen games should work.

Comment Re:Count me stunned (Score 1) 91

I've checked the code, and this is in fact the case. All it does is marshal the function arguments into a buffer and send them off into the GPU core, for every OpenGL function.
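To illustrate what such a shim looks like (a made-up function ID and transport, not the actual Broadcom code): each GL entry point just packs its arguments into a message and ships it across to the other core, where the real implementation lives:

```python
import struct

GL_CLEAR_COLOR_ID = 0x0102   # hypothetical function ID in the RPC protocol

def send_to_gpu(message):
    """Stub for the mailbox/shared-memory transport to the GPU core."""
    raise NotImplementedError

def marshal_glClearColor(r, g, b, a):
    # No real GL work happens on the ARM side; the call is serialized and
    # executed by the closed firmware running on the GPU core.
    message = struct.pack("<Iffff", GL_CLEAR_COLOR_ID, r, g, b, a)
    send_to_gpu(message)
```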

For the people who want open drivers because of e.g. linking problems, the ability to rebuild them, ABI issues, etc., this is good. For the people who want open drivers to improve them, fix bugs, see how they work, add new APIs, etc., this is useless. It certainly isn't an "open source driver"; it's just "open source ARM libraries". Broadcom is just as closed as always; they're just getting better at making that closedness play nicer with open source folks.

Personally, I would be happier with a platform that implements graphics like this than with a platform where the userspace blob is where the fun is and will never get open sourced (like most other embedded Linux platforms), but I still won't be buying a Pi. There's something about booting via the GPU blob that still disgusts me; I shouldn't have to use their proprietary firmware just to boot Linux. If only they hadn't made the boneheaded decision to design an SoC that boots from the GPU core, and had instead booted from the ARM CPU like everyone else, then we could just throw their GPU "firmware" in /lib/firmware like every other device driver does, and I'd be perfectly happy.
