Comment: Re:Why not fork/wrap systemd to make it more sane? (Score 2) 863

by joib (#48249487) Attached to: Debate Over Systemd Exposes the Two Factions Tugging At Modern-day Linux

That being said, the design of systemd confuses me. It seems ripe for all manner of stability and security problems. As I understand it, it bundles a large number of services into a single process, which takes the place of the init daemon. That's guaranteed to cause all kinds of system crashes.

What I don't get is why it isn't split up into multiple processes. All the same functionality could be provided by having a simple core init daemon that loads a set (perhaps a small set) of child processes. It wouldn't take longer to load. The services and behavior would be identical. But it would be a lot more stable, because a child process could be restarted if it crashes, keeping init to a bare minimum.

Eh, this is myth #1 in http://0pointer.de/blog/projec... . systemd consists of 69 different binaries (by now, probably more). That being said, it's true that PID 1 (/sbin/init) is larger than the corresponding SysV init, since it does more things (e.g. cgroups for service management, support for new-style daemons, a D-Bus interface).

Comment: So Paul Venezia doesn't like systemd, we get it (Score 1) 863

by joib (#48248979) Attached to: Debate Over Systemd Exposes the Two Factions Tugging At Modern-day Linux

Yet another clueless anti-systemd rant by Paul Venezia, yawn.

So now he resorts to ad hominem, labeling systemd proponents as clueless noobs while serious admins supposedly hate it. Right. I, for one, manage hundreds of Linux machines as a professional system admin, and I can't wait until we finally get rid of that SysV init crap in favor of systemd (I won't rehash all the advantages systemd brings here). Due to EL7 switching, we'll eventually get there; thanks, Red Hat!

Comment: Re: Difficult to defend against (Score 1) 630

by joib (#46711601) Attached to: Navy Debuts New Railgun That Launches Shells at Mach 7
Old-fashioned anti-ship missiles can be disabled or destroyed by the defending ship's close-in defenses. This is because the incoming missile is filled with sensitive electronics, guidance systems, explosives, fuel, turbojet engines, stabilizing fins, etc, and is very likely to be damaged or destroyed if hit by a 20mm round from the defending ship's CIWS missile defenses.

FWIW, AFAIU the old gun-based CIWS systems are being replaced by missile systems (RAM), since they don't work against modern supersonic anti-ship missiles, to say nothing of railgun projectiles. Think about it: the gun shoots a projectile traveling at about Mach 3, roughly the same speed as the incoming missile(?). So at the outer end of its range (say, 4 km?) it starts shooting. The shells and the missile pass each other at around 2 km, at which point it starts to become pointless to keep shooting, since even if you do hit the damn thing (at 1 km, by then) it will more or less continue on its trajectory through sheer momentum, thanks to traveling at Mach 3. There's simply not enough time for the fire-control algorithm (Kalman filter, or whatnot) to do its magic.
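Spelling out that geometry with rough numbers (a back-of-the-envelope sketch only; the speeds and the 4 km engagement range are the guesses above, and drag is ignored):

```python
# Back-of-the-envelope intercept geometry for a gun CIWS vs. a Mach 3
# anti-ship missile. All numbers are illustrative assumptions, not
# real system specs.

MACH = 343.0              # speed of sound at sea level, m/s
v_missile = 3 * MACH      # incoming missile speed (assumed)
v_shell = 3 * MACH        # CIWS shell velocity (assumed, drag ignored)
open_fire_range = 4000.0  # m, assumed outer engagement range

# Both travel toward each other, so the closing speed is the sum.
closing_speed = v_missile + v_shell

# Time until a shell fired at open_fire_range meets the missile,
# and the range at which they meet.
t_meet = open_fire_range / closing_speed
meet_range = open_fire_range - v_shell * t_meet

print(f"closing speed: {closing_speed:.0f} m/s")
print(f"shell meets missile after {t_meet:.2f} s at {meet_range:.0f} m")
```

With equal speeds the meeting point is exactly halfway, i.e. 2 km, and the whole engagement window is under two seconds, which is the point: there is very little time to correct aim.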

So, how to defend against railguns? Well, you get a bigger railgun! :) Or ballistic anti-ship missiles. But yeah, probably quite hard to do anything to the railgun projectiles after they are launched.

Comment: Re:How does the intercommunication work? (Score 4, Informative) 208

by joib (#45869219) Attached to: Intel's Knights Landing — 72 Cores, 3 Teraflops
The mesh replaces the ring bus used in the current generation MIC as well as mainstream Intel x86 CPU's. Each node in the mesh is 2 CPU cores and L2 cache. The mesh is used for connecting to the DRAM controllers, external interfaces, L3 cache, and of course, for cache coherency. The memory consistency model is the standard x86 one. So from a programmability point of view, it's a multi-core x86 processor, albeit with slow serial performance and beefy vector units.

Comment: Re:Immersion Would Be Better For the Environment (Score 1) 87

by joib (#41720219) Attached to: How Google Cools Its 1 Million Servers
The problem is that the waste heat from servers is pretty low grade; Google runs their data centers hotter than most, and they report a waste heat temperature of about 50 C. I would guess that the water they use to cool the air thus gets heated to at most 45 C or so, which makes it difficult to use efficiently or economically. At least over here, district heating systems have an input temperature around 100 C (in some cases slightly more; the pressure in the system prevents boiling).
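To put rough numbers on how far 45 C water is from district-heating grade: you could in principle upgrade it with a heat pump, and the ideal (Carnot) coefficient of performance gives an upper bound on how cheaply. This is just an illustrative calculation with the temperatures assumed above, not a claim about any real installation:

```python
# Illustrative only: upgrading ~45 C server cooling water to a
# ~100 C district-heating supply with an ideal heat pump.

T_waste = 45 + 273.15      # K, assumed cooling-water outlet temperature
T_district = 100 + 273.15  # K, assumed district-heating supply temperature

# Carnot COP for a heat pump delivering heat at T_district while
# drawing it from T_waste: each unit of electricity could deliver at
# most this many units of heat. Real machines do considerably worse.
cop_carnot = T_district / (T_district - T_waste)
print(f"ideal COP: {cop_carnot:.1f}")
```

Even in the ideal case you'd be burning electricity to upgrade the heat, and real COPs are well below the Carnot limit, which is roughly why low-grade heat tends to get dumped rather than sold.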

I don't see how this would be any different if the servers were immersion cooled with mineral oil rather than air; in both cases the waste heat needs to be exchanged to water, and even with immersion cooling you couldn't run the system that much hotter without affecting the reliability of the servers.

Comment: Re:Immersion Would Be Better For the Environment (Score 1) 87

by joib (#41720201) Attached to: How Google Cools Its 1 Million Servers
That might be a relevant argument if immersion cooling (or, generally, liquid cooling of the servers themselves) were somehow new, innovative, or non-obvious. It's none of those. Secondly, I didn't mean to imply that Google would turn around on a dime, but rather that at least some of the newer data centers would use something better if it were available. The Hamina data center seen in those pictures, for instance, was opened in 2012 and seems to use the same air-cooled hot-aisle containment design. I haven't seen "Princess Bride" (assuming it's a movie or play), so I won't comment on that.

Comment: Re:Immersion Would Be Better For the Environment (Score 2) 87

by joib (#41715897) Attached to: How Google Cools Its 1 Million Servers
Current generation Google data centers already have a PUE around 1.1, so whatever they do by tweaking the cooling, they cannot reduce the total energy consumption by more than about 10%. Of course, at their scale 10% is still a lot of energy, but the question is how much of that they could actually recover by going to immersion cooling. So far the anecdotal answer seems to be "not enough", since otherwise they would surely have done it already.
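The bound follows directly from the definition of PUE (total facility energy divided by IT equipment energy); a one-liner makes it concrete:

```python
# Upper bound on energy savings from better cooling, given a PUE.
# PUE = total facility energy / IT equipment energy.

pue = 1.1  # reported for current-generation Google data centers

# Even if cooling and all other overhead dropped to zero (PUE -> 1.0),
# total consumption could shrink by at most this fraction:
max_savings = 1 - 1 / pue
print(f"max possible savings: {max_savings:.1%}")
```

So at PUE 1.1 the overhead is about 9% of the total, and no cooling scheme, immersion or otherwise, can save more than that.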

Comment: Re:Ensuring the Quality of Textbooks (Score 1) 109

by joib (#41516473) Attached to: Teachers Write an Open Textbook In a Weekend Hackathon
Every teacher individually?

AFAIU, yes. (That being said, while I have taught at the university level in Finland, I have no experience of the Finnish primary and high school system from the faculty viewpoint, so you might want to double-check with someone else.) Also, consider that there are only something like 5 million Finnish speakers, so it's not a particularly large market, and teachers are not exactly going to be overwhelmed by the number of available textbooks. E.g. in physics I think there are about 3-4 book series covering the high school curriculum. I suppose it's a bit different in the US, where one presumably cannot assume a teacher has time to evaluate all the available textbooks. Then again, at least from over here it seems that textbook selection in the US is extremely politicized (can a biology textbook cover evolution? WTF!?), which probably isn't conducive to a good outcome either.

Textbooks must teach to the content of the abitur and the standards being established by the Bologna Process. So, I guess the curricula are well defined. But I'm still surprised that this decision would be left to every teacher individually.

Yes, the Ministry of Education defines (broadly) the curriculum, so it's not like teachers are allowed to teach whatever they fancy. But generally, the large degree of autonomy given to teachers is often seen as one of the reasons why Finland does so well in these PISA tests. Teachers over here are pretty well educated, and it's a well regarded profession. Of course, there are other reasons as well, e.g. Finland is culturally pretty homogeneous and there are quite small socioeconomic differences compared to many other countries. Anyway, it's not like teachers are alone in choosing textbooks, of course they talk with colleagues etc., and professional societies do from time to time publish reviews of the available textbooks, which I assume teachers read carefully.

As an aside, the Bologna process AFAIK covers only higher education (at the polytechnic/university level: bachelor/master/PhD), not high school. Of course, it indirectly covers lower education as well, in the sense that it effectively requires that students entering higher education have certain skills.

Comment: Re:Ensuring the Quality of Textbooks (Score 1) 109

by joib (#41511565) Attached to: Teachers Write an Open Textbook In a Weekend Hackathon

I think this Finnish group needs someone who is an insider on textbook selection committees to advise them. The last thing these committees want is to embarrass themselves by being seen to recommend a work that was produced in three days. They would lose their credibility, regardless of the quality of the work.

IIRC there are no textbook selection committees in Finland. Teachers are free to choose whichever book they want; or indeed to not choose any book at all and teach the class based on their own material.

Comment: Re:Meanwhile... (Score 2) 77

Some additional points:

- FWIW, Linux finally got rid of the BKL in the 3.0 release or thereabouts.

- Many (most?) 10Gb NICs are multiqueue, meaning that the interrupt and packet processing load can be spread over multiple cores.

- Linux and presumably other OSes have mechanisms to switch to polling mode when processing large numbers of incoming network packets.

That being said, your basic points about interrupt latency being an issue still stand, of course.
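The polling point is easy to motivate with rough numbers (a sketch; the ~5 us per-interrupt cost is an assumption, not a measured figure):

```python
# Back-of-the-envelope: why polling (NAPI-style) beats per-packet
# interrupts at 10 Gb/s line rate. All costs are rough assumptions.

link_bps = 10e9                # 10 Gb/s link
frame_bits = 1500 * 8          # assumed full-size Ethernet frames
pps = link_bps / frame_bits    # packets per second at line rate

interrupt_cost_s = 5e-6        # assumed ~5 us per interrupt
                               # (entry/exit, cache pollution)
cpu_fraction = pps * interrupt_cost_s

print(f"{pps:.0f} packets/s")
print(f"interrupt-per-packet would eat {cpu_fraction:.0%} of a core")
```

That is, taking an interrupt per packet would need several cores' worth of CPU just for interrupt overhead, before any actual protocol processing, which is why the kernel masks the interrupt and polls the ring buffer under load.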
