Comment really interesting, but not yet google.com (Score 1) 254

You know, this looks very interesting actually. I wouldn't be so bold as to say that it's "taking on Google", but it's a great idea. For example, right now it looks like yacy.net is down (maybe because of us, btw) - but you can just install it on your own Linux-based machine and you can still search the network, heck there's even an apt repository.

From there you get your own search engine, which you can even use to search only your LAN or private network. You can use it not only to search the distributed database, but also to crawl your own sites to improve the results for your community.
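
For the curious: once a node is running, you can query it over plain HTTP. Here's a rough Python sketch; the yacysearch.json endpoint and the default port 8090 are from memory, so double-check them against your own install:

import json
import urllib.request

# Ask a local YaCy node for results; the endpoint name and port are
# assumptions based on the default install, verify them locally.
url = "http://localhost:8090/yacysearch.json?query=slashdot"
with urllib.request.urlopen(url) as response:
    data = json.load(response)

# The JSON mirrors an RSS feed: channels containing items.
for item in data["channels"][0]["items"]:
    print(item["title"], item["link"])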

Think about it: no more profiling of your searches, no more dependency on a central authority for results. More importantly, no more single point of failure for the collective knowledge of mankind. Seems to me like a good goal.

That is a pretty good start, I would say. Of course we shouldn't expect the results to be better or even close to Google's, but if we all jump in and help this project, this could become something decent.

Or we could just sit back in our usual armchair specialist posture and say it's all crap. This is slashdot after all...

Comment In between maybe? (Score 5, Insightful) 185

Maybe there's a sweet spot between "no testing at all" and "replacing everything every three months"? In my experience, there is a lot of work to do in most places to make sure that proper testing is done, or at least that emergency procedures are known and people are well trained in them. Very often documentation is lacking and the onsite support staff have no clue where that circuit breaker is. That is the most common scenario in my experience, not overzealous maintenance.

Comment Re:Serious issues with this (Score 3, Insightful) 248

First issue: This is great if you have an external system to log to - if not, you're boned. This new logging system seems to cover both cases.

No, it doesn't: it does not protect you if you do not log to another server, or at least back up the hashes somewhere else. You still need a secondary server.

Second issue: One of the big reasons for doing this is to be able to detect when the log has been altered to cover a cracker's tracks. Obviously, a deleted log file is easily detected and a big indicator that your system has been compromised, so I'm not seeing your point here.

Well, I was painting with a rather broad brush there. As I explained earlier, just as with git rebase, you can certainly tamper with the logs without being detected if you are root, so this doesn't cover that case unless (again) you use a secondary server.

Third issue: As has been stated above, you can log to both the Journal and good old text-based log files. That way you can still use your existing tools on the text file while still being notified of log file alteration. I agree that a common format for log entries would be nice but may not be possible, since not every application logs the same kind of data. Note also that this proposal allows for arbitrary key/value pairs, so some standard conventions will probably come about after it's been used for a while.

Somebody else already answered this, but yeah: if you're going to write to logfiles anyway, why bother with the journal?

Fourth issue: Not sure I understand what you are talking about here... Obviously, backward compatibility will have to be taken into account by the devs. You should be able to read the files on other machines if you backed up your encryption keys, etc. (you do back up that stuff, right?). From reading the articles, it sounds like the devs have thought about these issues and/or they have already been raised by others. They seem to be fairly easy to deal with.

Backward compatibility doesn't seem to have been taken into account by the devs. It's in the FAQ:

Will the journal file format be standardized? Where can I find an explanation of the on-disk data structures?
At this point we have no intention to standardize the format and we take the liberty to alter it as we see fit. We might document the on-disk format eventually, but at this point we don’t want any other software to read, write or manipulate our journal files directly. The access is granted by a shared library and a command line tool. (But then again, it’s Free Software, so you can always read the source code!)

I'm not necessarily on board with this proposed system either, but your issues seem like they've already been covered by the proposed design.

I disagree with this analysis. :)

Comment How this can be tampered with (Score 1) 248

I can mostly agree with you. There is one thing you might be missing.

[...]

I think what you are missing is this replacement is intended to prevent "undetected" tampering with the logs. Currently, a cracker can delete the log entries that would identify his or her activities on the machine, thereby going unnoticed. Deleting the log files or destroying the tools, as you suggested, would certainly be a detectable sign that the machine was compromised.

My point is: even with git, if someone has access to the repository, it *can* be tampered with. It's harder and may take longer than with a plain text file, but it's completely possible. With git, there's even an easy way to do it (git rebase), and I suspect that cracking toolkits will adapt and make that easier too. Note that I assume here that you save the topmost hash of the tree to a secure location, as documented:

Inspired by git, in the journal all entries are cryptographically hashed along with the hash of the previous entry in the file. This results in a chain of entries, where each entry authenticates all previous ones. If the top-most hash is regularly saved to a secure write-only location, the full chain is authenticated by it. Manipulations by the attacker can hence easily be detected.

If only the topmost hash is saved to a backup location, then I just need to re-roll all the log entries written after that saved hash, and the tampering goes undetected. The only argument for this technique is that you could keep more than just the latest hash (say, the last N hashes), which, for a sufficiently high N, we could argue amounts to logging to a different machine anyway.
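
To make the attack concrete, here is a minimal Python sketch of the chaining idea and the re-roll; this is just the concept, not systemd's actual on-disk format:

import hashlib

def chain(entries, seed=b""):
    # Hash each entry together with the previous hash, git-style,
    # so the last hash authenticates everything before it.
    h = seed
    hashes = []
    for entry in entries:
        h = hashlib.sha256(h + entry.encode()).digest()
        hashes.append(h)
    return hashes

log = ["boot ok", "sshd: root login from 1.2.3.4", "cron: job ran"]
checkpoint = chain(log)[0]  # the only hash ever saved off the machine

# Root attacker: throw away everything written after the checkpoint,
# forge a clean history, and recompute the chain from the saved hash.
# The forged log is internally consistent and still matches the
# checkpoint, so verification passes.
forged_hashes = chain(["cron: job ran"], seed=checkpoint)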

Comment Or to put it another way: why not Monkeysphere? (Score 2) 70

There is a project called Monkeysphere which has been working on doing this, and much more, with PGP for a while. They support SSL certificates in the browser (with some difficulty) and SSH host key authentication, and generally aim to bridge the PGP web of trust with other tools to decentralize the work of certification authorities.

How does Convergence compare with Monkeysphere? Why didn't you collaborate with the Monkeysphere project instead of starting your own?

Comment Serious issues with this (Score 5, Insightful) 248

Now, without getting into how much I dislike PulseAudio (maybe because I'm an old UNIX fart, thank you very much), I think there are really serious issues with "The Journal", which I can summarize as follows:

1. the problem it's trying to fix is already fixed
2. the problem isn't fixed by the solution
3. it makes everything more opaque
4. it makes the problem worse

The first issue is that it is trying to fix a problem that is already easily solved with existing tools: just send your darn logs to an external machine already. Syslog has supported networked logging forever.
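
For instance, shipping logs off the box from Python takes a couple of lines with the standard library ("loghost" here is a placeholder for whatever central log server you run):

import logging
import logging.handlers

logger = logging.getLogger("myapp")
# Send records to a remote syslog daemon, by default over UDP port 514.
logger.addHandler(logging.handlers.SysLogHandler(address=("loghost", 514)))
logger.warning("disk almost full on /var")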

Second, if you log on a machine and that machine gets compromised, I don't see how having checksums and a chained log will keep anyone from trashing the whole 'journal' by just running:

rm -rf /var/log

What am i missing here?

Third, this implements yet another obscure and opaque system that keeps users away from understanding how their system works, making everything available only through a special tool (the journal), which depends on another special tool (systemd), both of which are already controversial. I like grepping my logs. I understand that http://logcheck.org and similar tools are not working very well, but that's because there isn't a common format for logging, which makes parsing hard and application-dependent. From what I understand, this is not something The Journal is trying to address either. To take an example from their document:

MESSAGE=User harald logged in
MESSAGE_ID=422bc3d271414bc8bc9570f222f24a9
_EXE=/lib/systemd/systemd-logind
[... 14 lines of more stuff snipped]

(Never mind for a second the fact that to carry the same amount of information, syslog only needs one line, not 14, which makes things actually readable by humans.)

The actual important bit here is "User harald logged in". But the thing we want to know is: is that a good thing or a bad thing? If it was "User harald login failed", would it be flagged as such? It's not in the current objectives, it seems, to improve the system in that direction. I would rather see a common agreement on syntax and keywords to use, and respect for the syslog levels (e.g. EMERG, ALERT, ..., INFO, DEBUG), than reinventing the wheel like this.
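
Just to illustrate what respecting the levels looks like, with plain old syslog the facility and severity travel with the one-line message. A quick Python sketch, reusing the example user from their document:

import syslog

# Facility and severity are part of each entry, so tools can tell a
# successful login from a failed one without parsing English prose.
syslog.openlog("logind")
syslog.syslog(syslog.LOG_AUTH | syslog.LOG_INFO, "User harald logged in")
syslog.syslog(syslog.LOG_AUTH | syslog.LOG_WARNING, "User harald login failed")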

Fourth, what happens when our happy cracker destroys those tools? This is a big problem for what they are actually trying to solve, especially since they do not intend to make the format standard, according to the design document (published on you-know-who, unfortunately). So you could end up in a situation where you can't parse those logs because the machine that generated them is gone, and you would need to track down exactly which version of the software generated it. Good luck with that.

I'll pass. Again.

Comment Which protocols? Which algorithms? (Score 1) 242

The article doesn't say which protocols the agencies are switching to. This could be an issue.

As weak as the current (clear-text) system sounds, there is no expectation of privacy on it, and officers know that. In Montreal at least, when they need to communicate privately, they exchange cell phone numbers and talk over the phone, which is considered more secure (something could be said about that too).

With an encrypted system, the officers will then expect the whole network to be secure and will therefore say a *lot* more on the airwaves. As soon as a radio is stolen or the protocol cracked, the whole thing will fall apart and then much more information will be revealed.

The more secure the system, the higher the risk.

I like the idea that the current system is transparent and even allows ham radio operators to communicate with emergency teams if they need to. This just makes sense. Encrypting everything seems like a bad move, and probably a business scam too.

Comment Shipping software for your computer-car (Score 4, Insightful) 215

What's different here is that Ford is now shipping software to their customers, as opposed to having them go back to their favorite garage and have the mechanic plug the car into a magic computer that often even he only faintly understands. This is a significant paradigm shift. It means that Ford will be able to manage more frequent software releases, and maybe start thinking about changing whole features within the lifetime of the car, outside of the regular "oh, you need an inspection after 100,000 km" kind of thing. So that's cool.

Now the bad part is that your "computer-car" remains proprietary software, and there will probably still be no way in hell that you will be able to modify that software yourself, short of some reverse engineering. But it necessarily opens up interesting avenues, like running Rockbox on your radio receiver, or flashing some controllers with free software, for those of us who are into that kind of crazy thing. I say "necessarily" because car owners currently lack the proprietary interfaces needed to interoperate with the car, which are a significant barrier to entry for us wannabe car hackers.

In order to deliver that software to Joe User, Ford has to lower this barrier to entry, and that can only be a good thing for everyone.

Comment Semantic versioning (Score 1) 378

I am always fascinated by Linux version numbers. I can't quite figure out what they mean, and I suspect I'm not the only one. Reading that "2.6.40 is more distinct from 2.6.0 than 2.6.0 was from 2.0.0" doesn't make any sense to me. For my projects, I have always intuitively followed the Semantic Versioning principles: X.Y.Z, where X is the major version, Y is the minor version and Z is the patch release. You increment X when you change the API, Y when you add a feature, and Z when you make a small bugfix. Simple and clear. Linux, on the other hand, seems to follow the whims of Linus more than any logical process, which seems to be a common pattern in this project, and which is not always for the worst, though...
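
The payoff is that versions become trivially comparable. A minimal sketch in Python:

def semver(version):
    # Split "X.Y.Z" into an integer tuple so comparison is numeric;
    # comparing strings would wrongly put "2.6.9" after "2.6.40".
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

assert semver("2.6.40") > semver("2.6.9")
assert semver("3.0.0") > semver("2.6.40")  # an API break bumps the major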
