Comment Re:Is systemd more complex than it needs to be? (Score 1) 469

Sometimes complexity is required to make things simpler down the road.

I cannot see this being the case here. I read something a week or so back (I think it was on the 'boycott systemd' website) arguing that the existing init system is already more complex than such a key part of the system needs to be. An init that does nothing but start one other process, then reports any processes needing reparenting or reaping over a specific Unix or network socket to whatever process is listening, would be better.

Why? Because then the system would be modular and survivable - the process acting as the "orphaned process parent" could be separate from the part that "reaps zombie processes". The part that actually starts system services and makes sure they keep running would be separate from either as well.

This is a clear separation of duties: it simplifies each part of the code so it is easier to check for correctness, and it ensures that something would have to go monumentally wrong for the system to crash because something tried to "kill init".
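The split described above is small enough to sketch. Below is a rough Python sketch of the PID-1 half only - `minimal_init` and the `notify` callback are invented names for illustration, and the callback stands in for the Unix/network socket mentioned above:

```python
import os

def minimal_init(first_child_argv, notify):
    """Sketch of a PID-1 whose only jobs are: start one process, then
    reap anything that exits (including orphans reparented to it) and
    hand the details to a separate listener via `notify`."""
    pid = os.fork()
    if pid == 0:
        # Child: become the single process init is responsible for starting.
        os.execvp(first_child_argv[0], first_child_argv)
    while True:
        try:
            # Block until any child - direct or reparented - exits.
            dead_pid, status = os.waitpid(-1, 0)
        except ChildProcessError:
            break  # nothing left to reap
        notify(dead_pid, status)
```

The service manager and the zombie-notification consumer would then live in ordinary, restartable processes on the other end of that socket, which is exactly the survivability argument being made.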

I could, probably, write the basis of this entire system myself in a month or so of concentrated coding but would first have to start with an equally long period of research and planning to make sure I knew exactly how the parts were to interact, how to make sure that the orphaned processes were properly reparented, etc...
While I could probably find the time to do this, I do not currently have it, and there are certainly people out there with more skill at writing important system code like this than I have.

(Come to think of it: before I spotted this post and had an unnatural urge to respond, I recall someone mentioning Solaris' SMF as being similar to my proposal. Which means that at least three reasonably well-educated people think this separation of duties into multiple modules is a good thing.)

Comment Re:Is systemd more complex than it needs to be? (Score 1) 469

Every Android device I have has USB debugging turned on and is set to allow ADB connections from my other machines, so I can do a "logcat" or even "adb shell". This does not help when the device gets stuck somewhere in the boot process and just hangs (and I have had this happen). It does not help when the device has decided it will no longer allow USB connections. It also does not help when the device freezes, nothing is shown in "adb logcat", and an "adb shell" hangs.

The fact that I can later go in and get a copy of the log, however, does help immensely.

When I try to start a service and it does not start, the reason why should be logged in the system log or the program's own log file. I should not have to use commands to communicate with the init process to find out the reason. What's more, the error messages I have seen when I have had to interface with systemd have been singularly unhelpful, and in one case actually pointed me in the wrong direction when I was looking for a solution. Now, this may be no different from if the data had been logged to the system log or an application's logfile, but I have seen much more helpful errors from systems using sysvinit or upstart. I have, admittedly, also seen similarly unhelpful messages from those two, but at least those messages were stored in a place and a format that would (hopefully) survive a panic or oops - and be much more easily reconstructed if the file is damaged.

I have nothing against replicating logs over a network or other connection, and I have nothing against having to go and download those logs manually over FTP, TFTP or even SCP. But by doing things like storing the logs in a database instead of as a flat-text file, you ensure that my job of finding out what went wrong - and working to stop it from happening again - gets harder, because I need yet another tool just to read the logs.
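One reason flat text is easier to work with: best-effort recovery after damage needs nothing but a text decoder and a loop. A hypothetical sketch (the "host tag: message" line format here is invented for illustration, not any real syslog standard):

```python
def recover_log_lines(raw_bytes):
    """Best-effort recovery of a damaged flat-text log: decode what we
    can, keep every line that still parses, skip the rest. A binary,
    indexed log format generally needs its own dedicated tool (and an
    undamaged index) for the same job."""
    recovered = []
    for line in raw_bytes.decode("utf-8", errors="replace").splitlines():
        # Keep lines that still look like "<host> <tag>: <message>" and
        # whose prefix was not mangled into replacement characters.
        if ": " in line and "\ufffd" not in line.split(": ", 1)[0]:
            recovered.append(line)
    return recovered
```

Even with a corrupted run of bytes in the middle of the file, every intact line before and after it is still readable with `grep`, `less`, or three lines of script.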

Comment A perspective (Score 2) 469

It does not matter what Apple does - there are tight controls on the system, and code contributed to Darwin might never see life as part of the actual Mac OS X released by Apple.

The init process in Unix and Linux is a very special thing - if it ever crashes, so does the whole machine. It is the first process started by the kernel after it finishes initializing the system and mounting the root filesystem, and it is the "parent" of any process that deliberately orphans its child processes before exiting. It is also in charge of "reaping" any zombie processes. With each addition to its duties, the surface area for errors increases, which increases the risk that the system will just suddenly crash.
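The "reaping" duty mentioned above is easy to demonstrate. This sketch (Linux-only, since it peeks at `/proc`) shows a child that has exited lingering as a zombie until some parent calls wait() on it - which is exactly what init must do for every orphan on the system:

```python
import os
import time

def spawn_and_reap():
    """Show that an exited-but-unwaited child is a zombie, then reap it
    the way init would. Returns the observed process state and the
    child's exit code."""
    pid = os.fork()
    if pid == 0:
        os._exit(7)              # child exits immediately
    time.sleep(0.1)              # child has exited but is not yet reaped
    # /proc/<pid>/stat: "pid (comm) state ..."; 'Z' means zombie.
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().split(") ")[-1].split()[0]
    _, status = os.waitpid(pid, 0)   # reap it, as init does for orphans
    return state, os.WEXITSTATUS(status)
```

If PID 1 ever stops doing this loop, zombies accumulate until the process table fills - one concrete reason a crash in init takes the machine with it.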

This is a problem that people lambasted Microsoft for from the birth of Windows through its transition to a 32-bit system, and it continues to an extent to this day: one piece of the system doing a lot of work, with a wide surface for errors, yet critical for everything to work properly. I am loath to praise Microsoft for anything, but I must praise them for this: they have been moving away from that design, and I have actually found Windows 7 to be pleasantly stable - to the point of recovering from a graphics driver crash by re-loading and re-initializing the driver without losing any data. They have turned things around and begun to compartmentalize to a high degree - lowering the amount of code in each component, raising their programmers' ability to find and fix bugs during development, and lowering the chance that a user will ever hit one of those bugs.

That compartmentalization and modularity (of the userspace, at least) was a hallmark of the design of Unix - each piece did exactly one job and did it well, and was kept as small as possible to reduce the room for errors. Yet with this "systemd" we see Lennart Poettering leading the charge to turn that around in order to save some small fraction of time during the boot process. That he created PulseAudio - which appears to have flopped severely and, at least for me, was the cause of a lot of problems (and not just for me: when I first hit those problems I found thousands of people reporting that things were fixed once they got rid of PulseAudio) - seems to be lost on people.

No, one mistake does not make everything someone does wrong, and in this case trading a little simplicity to "do the boot process faster" is at least a defensible idea. Extending that system so it subsumes some of the processes it used to start in parallel, though, is just idiocy. What's next? Am I going to find that I can send a "draw ASCII art of a kitchen sink" command to systemd and have it take over the current TTY to do just that?

If systemd were just a system for organizing the boot process - adding some complexity to simplify much more, and making sure the daemons providing services are started in parallel to shorten the system's boot time - well, I'd have absolutely no objection to it. Instead it has subsumed separate projects and forced people to fork or recreate them if they do not want to use systemd. Further, it appears that people who dislike systemd but like Gnome will soon lose the option of running new versions of Gnome, because several key components of Gnome now have a hard dependency on systemd or one of its sub-projects. Open Source is supposed to be about choice, and by forcing people to use something they might not want, that choice is being taken away.

So, with all the sarcasm I can muster: all the systemd supporters out there, and its creator, have my hearty applause for doing something that goes against one of the key tenets of the Open Source movement. Bravo! You've successfully taken something away from the very people you claim to be supporting. What a show of hypocrisy!

Comment Re:Not gonna fly (Score 5, Informative) 166

Simply connecting to the same swarm does not mean you are acting in concert with any given other member of the swarm. In fact, you can only interact with a relatively small portion of a large swarm at any given time.

In this case, however, there is no proof that any of the Does were even *ONLINE* and in the swarm at the same time. Due to a large body of legal precedent from other cases (several pages of precedent are quoted in the motion), it is clear that to be considered "acting in concert" with another member of the swarm, you have to have actually exchanged parts of the file(s) in question with them. Here there is zero proof that the Does "acted in concert" as the law requires.

And anyway, consider the burden that any of these joined 'John Doe' copyright cases places on the defendants - in effect creating a separate "mini-trial" within the overall trial for each defendant. To put it simply, there is no guarantee that the Does in any given case will be using the same defense. This places an (and this is a legal term) "unfair burden" on the defense if the case remains joined.

But that's beside the point. The "joined Doe case" is used for an ex parte expedited-discovery period so that the plaintiff can attempt to extract out-of-court settlements from the defendants. They then dismiss the joined case and file separate actions against each defendant who refuses the settlement offer. And since that is not a "sure thing", they don't want to do it - because the only identity you can get by serving a subpoena on an ISP is that of the account holder, who cannot be proven to have been the actual "criminal" in the case. In fact, it could be a significant other, child, relative, friend or even a neighbor of the account holder. Hell, if the account holder has an unsecured network, or one secured with an easily broken scheme such as WEP, the actual "criminal" could have been a complete stranger sitting outside the account holder's house in the dead of night.

Because of the numerous possible defenses each individual defendant might argue, separate cases should - technically *MUST* - be filed instead of a joined case like the one in question. These points are actually made in the motion, and they make so much sense that I can see the case being severed and the subpoenas on Does 2-5 quashed (with any information gained via those quashed subpoenas destroyed and made illegal to act upon).

In case you have not read the motion, please do so before posting. It's shameful to argue something without having all the facts available.

Comment Re:Sounds a little hokey (Score 5, Informative) 166

Nope. A swarm is nothing like a conference call, because you aren't interacting with every member of the swarm - just a few members at a time, restricted by how many connections your bandwidth can actually handle. Regardless, the reported "offenses" are so separated in *TIME* that there is no guarantee the Does in this case were actually online and part of the swarm at the same time as each other. Even if they were, the chance that they actually exchanged parts of the movie with each other is close to nil.
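That timing point can be made mechanical: two peers cannot possibly have exchanged pieces unless their swarm sessions at least overlapped in time. A toy sketch with made-up session timestamps:

```python
def concurrent_pairs(sessions):
    """Given {name: (join_time, leave_time)} for each alleged swarm
    member, return the pairs whose sessions actually overlap - the bare
    minimum before any two of them could even have exchanged pieces
    (overlap alone still proves nothing about an actual exchange)."""
    names = sorted(sessions)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            a_start, a_end = sessions[a]
            b_start, b_end = sessions[b]
            if a_start < b_end and b_start < a_end:
                pairs.append((a, b))
    return pairs
```

If the timestamps in a complaint never overlap, the "acting in concert" theory fails before you even reach the question of who traded what with whom.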

You really should read the linked documents - they might be in legalese, but it is close enough to English for just about anyone to follow, and it explains things simply enough for anyone to understand. Yes, there is an explanation of how a BitTorrent swarm works in the actual motion, written to be understandable by the legal professionals involved - including the judge in the case.

And regardless of whether or not "participating in the same swarm" is a legal theory that holds water and fulfills the requirements of the rules, well... There is the fact that these "joinders" benefit only the plaintiffs in the case and create hardships for the joined defendants that break the required "fairness" of the legal system. (Yes, the legal system is supposed to be fair - surprising, no?) So the joinder should be undone anyway :)

Again, read the actual motion. These arguments are covered in depth and explained in excruciating detail inside it. To tell the truth, I will be surprised if this motion doesn't go through. Doe #4 has a *LOT* of legal precedent on his side :)

Comment Re:bullcrap (Score 1) 475

Nice qualification there, but there is no question that this is an abuse. The images BuckyBalls could have any claim to are publicly available anyway, and the use of the name is in reference to the product. So there is no way the company could have filed a takedown notice without perjuring itself. And yes, it is perjury - a DMCA takedown notice includes a sworn statement that you have the legal right to file it (i.e. that the video or music in question infringes a copyright you hold, or that you have been hired to enforce the copyright holder's interests).

Further, the DMCA contains some strict prohibitions against misuse and some nasty punishments for cases where it is misused. Since the people at Zen Magnets hold the copyright to the video, BuckyBalls (or their parent company) has no copyright being infringed and hence no legal standing to file a DMCA takedown notice on the video. QED: if the people at Zen Magnets handle this correctly, the "Goliath" of the "magnetic sphere" industry will feel the pain of having an idiot for a CEO.

Comment Re:Ridiculous. (Score 1) 422

The thing is that an FPS game or a benchmark generally stresses the entire card - the 3D rasterizers, the shaders, etc. - all at the same time. With this problem - which Blizzard has actually acknowledged - what is being stressed is one part of the card: the part that handles outputting a mostly static 2D image. The card as a whole might be designed to withstand abuse, but if a single part of it is stressed hard, it can exceed its TDP and, in systems with poor airflow and cooling, overheat and possibly "melt".

While I cannot see this "melting" being a problem for more than a small number of people, even some gamers who built their systems to run, say, Crysis at more than 60fps could see problems - and a lot of people have done that. It might not be a problem for most people, but it will be for enough of the "serious gamer" target audience of SC2. Anyway, you are correct that most games run without caps - they don't need one because their inter-frame processing effectively caps them.
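For what it's worth, the standard fix for this kind of problem is a frame-rate cap: after rendering each frame, sleep away whatever is left of the frame's time budget instead of immediately redrawing. A rough sketch (the numbers and function names are illustrative, not anything from SC2):

```python
import time

def run_capped(render_frame, max_fps, duration):
    """Render loop with a frame-rate cap. Sleeping out the unused part
    of each frame's budget is what keeps a static menu screen from
    redrawing thousands of times per second and pinning the GPU."""
    budget = 1.0 / max_fps
    frames = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        start = time.monotonic()
        render_frame()
        elapsed = time.monotonic() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)  # give the GPU its idle time back
        frames += 1
    return frames
```

A game with heavy inter-frame simulation gets this capping "for free", which is why the problem only shows up on near-empty scenes like menus.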

(*prepares to get modded into oblivion*)

Comment Re:Penalty of Perjury (Score 1) 153

That does not make my argument any less true. The argument was that claiming "but I thought it was valid" makes filing a false DMCA notice totally legal. If "but I didn't know it was illegal" made breaking the law okay, every criminal would do it.

And thinking about it, there is a well-defined range of DMCA notices that are perjury. Which just reinforces the widely held belief (I can't call it a fact, as much as I'd love to) that the DMCA is a bad law. There is nothing that truly stops it from being misused - which, to me, would make it "unconstitutionally vague".

Comment Re:Penalty of Perjury (Score 2, Interesting) 153

And there is no need for any other coverage, really. If you send out a DMCA takedown and do not hold copyright to the material you are demanding be taken down - and have not been authorized to "act on behalf of the copyright holder" - then by having filed the DMCA takedown notice you have perjured yourself.

It's not hard to understand - this does mean, however, that every bad DMCA Takedown is prosecutable under extremely well-known law.

Comment Re:Time Bandits (Score 3, Informative) 184

That's just talk. I've tried to leave home users with Ubuntu before in the past. There's always something that goes wrong and is absolutely impossible for a home user to solve. It's just too *big* and has too many points of failure without the organized support backend of something like the Windows Platform. Open source offerings will get much better when they simplify and reintegrate.

You have obviously never run into the quality of people I have. I quite regularly get calls from family, friends of family (and their friends, who I've done work for) to fix their Windows machines. So while this isn't FUD, the problem isn't unique to open source at all.

And the integration thing? It is happening all over the place. It used to be that Gnome used CORBA and KDE used DCOP to provide application interfaces that could be hooked from other applications (to, say, control your music player from your IRC client) and to provide a sane place to find some system data. Now they both use D-Bus, which provides a lot more flexibility for those interfaces than DCOP or CORBA ever did - and it also ties into the hardware abstraction layer, so you can find information about all the hardware in the machine in one place.

But really... It sounds like you are calling for projects to merge so that there is less choice and more talent focused on individual projects. I hate to say it, but that will never occur.

Actually, it's not a conspiracy. At this point, Windows is simply more user friendly and usable. I suspect Haiku will overtake Windows in usability before the Linux desktop does, it just has a broad natural advantage in terms of architecture. You certainly can't take away Linux's server utility, though. It will always be firm in that market.

I see you took out the whole section where I covered the well documented cases of MS abusing its position in an attempt to force existing competitors out of a specific market (or out of business entirely) and to keep new competitors from entering that same market.

The moment Linux came even close to being usable, Dell and HP picked it up as options. Those don't do that well on the market. I would say they put exuberant faith in it to offer something like Ubuntu on a consumer machine. It certainly doesn't belong there.

Oh, I see... You are trying to claim that because Dell and certain other OEMs only now offer Linux, the reason they didn't before is that it wasn't ready. Sorry, but you fail your history check - part of the US DoJ's case against MS was that they did things like threaten to revoke bulk-licensing deals if anything other than an MS OS was offered as an option on a new machine, or require that every machine - regardless of whether or not it shipped with an MS OS - be counted when calculating the bulk-license price. That is what kept Linux off machines from major manufacturers.

I've never owned a machine that worked with Linux without incident. Never. My current laptop, for instance, the Gateway LT3103u, does not work well with Linux at all. Its battery life and power management under Linux are especially dismal- and this is pretty ordinary hardware. It's actually losing quite heavily to Vista on this machine. I find that hilarious.

I've run Linux on three different laptops now. All three were "new" when I purchased them, and all three have had better battery life in Linux. One of them reported 4+ hours available at full charge in Windows but would last maybe 2 hours, whereas in Linux it reports just shy of 3 hours and actually lasts just shy of 3 hours. So YMMV - this is why I suggest people not trust my anecdotal evidence without doing research and/or testing.

It sounds like you haven't used a Windows system since Windows 98. I can tell because you mention the system rebooting to install a USB device driver.

The last version of Windows that I actually ran in more than a VM was XP (Service Pack 2), and that one required me to reboot when I installed a USB web-cam. No, it should not have required it, but it did.

Windows users don't have to do research to know if something is supported on their system. Almost any device you buy includes a driver CD. I don't think it's terribly complex. It will even update WHQL drivers through Windows update. On Windows 7, you can basically just rely on Windows to find all its own drivers online.

I do no research to know if a device is supported by Linux. And no, this isn't because I have been using Linux for years. You make a statement about the driver CD... Well, I have never needed a "driver CD" for a piece of hardware in Linux. So your argument there is a non-starter.

It's not really the consumer's job to do this, though. Your OEM is supposed to handle all the basic driver packaging for your PC.

Exactly. And when I purchased my current laptop from Dell and told them to include Linux on it, everything magically worked flawlessly. I am not now running the version of Linux that shipped on the machine, but it is still fully functional. So what, exactly, is your point?

And it fails. I think Haiku has a better shot of becoming a usable desktop os. It's designed for the desktop, it has sane and stable driver API's, and it works with multimedia instead of against it. With the open source development model, it can manage slow adoption financially, unlike Be.

I have to agree with you here. But Linux struggles on the desktop because it has never tried to hide any of the power of the operating system from the user. It also was not designed by a small team with a single vision of what the OS should be.

I also have to agree with you on the 'sane and stable driver API' bit. However, with the move toward providing more and more interfaces for running drivers directly in user-space, I am seeing a change here. When those interfaces are solid and complete, then I will agree with the Linux kernel policy of "no stable driver ABI".

I try every new generation of Ubuntu. It's always a terrible disappointment. The UNIX platform is simply too ill adopted to desktop usage scenarios. It's too much of a stretch.

Really? Let me call your attention to Mac OS X. It is a Unix operating system - it has passed all the tests, and the people who control the UNIX trademark have given Apple the go-ahead to brand it as UNIX. So your contention that UNIX isn't a good base for a desktop OS is dead in the water.

About Haiku... I tried BeOS back in the final days before it died, and I absolutely loved it. The P3-500 I ran it on ran like a dog under Windows (IIRC, I had 2k installed) and felt like a new machine under BeOS. So I am hoping that Haiku can recapture all of that feeling. If I had a machine I could spare for it, I would give the current alpha a try on bare metal.

 
 

--
The above post contains data from both third-party sources and anecdotal evidence. Do independent research and/or testing before trusting either.

Comment Re:Time Bandits (Score 2, Interesting) 184

So the fact that numerous people - myself included - use Linux every day for basic tasks doesn't make it usable? The fact that my somewhat technophobic and highly computer-illiterate father often borrows my Linux laptop for browsing the web doesn't make it usable? That my sixty-year-old mother has requested that, for any computer I build her, I make sure Linux is installed doesn't make it usable?

Linux TRIES to compete as a consumer product. What happens? Well... Back when the Pentium was new, MS told OEMs one of two things: either "we are going to charge you for a license for every computer you manufacture, regardless of whether it has our OS on it or not" or "if you sell computers with any OS we did not produce, you will lose the bulk-licensing discount we give you". In other words, they forced computer manufacturers to put MS-created OSes on their machines - forcing any competition out of the market.(*)

And now MS has pushed so much misinformation into the public mind that teachers confiscate copies of Linux install disks, claiming they are illegal... Best Buy employees are taught that Windows is the only way to go... The public is taught that a lower-priced machine is better than a Mac, despite the Mac generally having a better processor, better video card, bigger hard drive and more RAM. MS basically does everything it can to keep the public from ever learning the truth about alternatives to its stranglehold on the OS market.

Yes, Linux has historically had problems handling new hardware. But these days it can run a wider range of hardware correctly than Windows can. I don't really have to research things anymore - if I go to Wal-Mart, Best Buy or any other big chain store and purchase a piece of hardware, it is almost certain to be supported by Linux. As an example: two weeks ago I walked into the local Wal-Mart, and purchased a web-cam and an MP3 player without doing any research on supported hardware or even looking for a "works with Linux" stamp on the packaging. I got home and plugged the web-cam in - and it worked(**). I plugged the MP3 player's dongle into the device and into the laptop, and it showed up. Everything fully supported. In Windows I'd have had to install the drivers and reboot before I could do anything.

*:Yes, they got hit with a lawsuit over this by the US DoJ, but it didn't have any real, lasting effect.

**: Okay, so it took a quick Google search to get the camera working with a couple of programs. But at least the information was there, and it didn't involve anything like "edit registry key..." - in fact, the solutions were short and to the point.

--
This post contains factual information from third parties as well as anecdotal evidence. You should not trust either before doing research and fact-checking.

Comment Re:Bad Summary (Score 1) 357

HTML5 with its XML encoding was just a "convenient example". The specific example has no bearing on the fact that an XML dialect with the features described in the patent is obvious. It is so obvious, in fact, that numerous word processors have supported these features in various forms in their binary file formats. What that means is that including them in an XML version of the same type of file is plainly obvious - not just obvious, but so obvious you'd have to be completely brain-dead NOT to include them.

Or are you still going to try and argue that my point is wrong because I gave an example that you want to find fault with?

Comment Re:Bad Summary (Score 1) 357

Sorry if you didn't manage to read my entire comment and understand what I was actually saying, which is: "HTML5 and its XML encoding are blindingly obvious, just as the XML dialect covered by this patent is blindingly obvious" - i.e., it fails the USPTO's "obviousness test", which, apparently, the USPTO is not fit to apply to any patent on software or relating to software.

(the "obviousness test" is simple: "is this obvious to someone skilled in the field?")

Comment Re:Bad Summary (Score 3, Informative) 357

You apparently have not read the actual claims of the patent. This patent covers any XML document which has an XSD definition and has:

  • rendering hints via an element or property of an element
  • a bookmarks element (of which two must be used to be valid)
  • a comments element
  • a 'text' element
  • a 'style' element
  • a 'font' element
  • a formatting element
  • a section element
  • a table element
  • an outline element
  • a proofing element

And any variation of implementation on the above. It also covers the manipulation of a file meeting that description on any computer—whether or not it has the program that generated the file installed.
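To see how ordinary a document meeting that description looks, here is an invented fragment built with Python's ElementTree - every element and attribute name below is illustrative, not quoted from the patent:

```python
import xml.etree.ElementTree as ET

# An invented word-processing dialect exercising the claimed feature
# set: style/font info, a rendering hint, a comment, paired bookmarks.
doc = ET.Element("document")
ET.SubElement(doc, "style", {"font": "serif", "size": "12"})
section = ET.SubElement(doc, "section")
para = ET.SubElement(section, "text", {"renderHint": "justify"})
para.text = "Hello, patent examiner."
ET.SubElement(para, "comment", {"author": "doe"}).text = "obvious?"
ET.SubElement(section, "bookmark", {"id": "start"})
ET.SubElement(section, "bookmark", {"id": "end"})

xml_bytes = ET.tostring(doc)
```

That an afternoon's worth of standard-library calls can produce a document matching the claims is, in itself, a decent illustration of the obviousness argument below.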

The thing is... this patent can be read as covering HTML5 in its XML encoding, and it completely fails the "obviousness test". How does it fail that test? Because it is, simply, plainly obvious to "one skilled in the field". A lot of the above features have been proposed for ODF and are brain-dead simple to add to ODF or any other XML format. Additionally, XML is used as a format for storing data precisely because it is a well-defined and easily manipulated format - so easily manipulated, in fact, that there is a complete language (XSLT) defined for the manipulation and transformation of XML.

Where it really fails is that it is neither "new" nor "novel". If Netscape had tried to patent the specific version of HTML supported by, say, Navigator 4, there would have been just as big a backlash. It'd be like implementing an open spec - say, ECMA-262 - and claiming a patent on it as "new" and "novel" because it has a specific set of system interface functions.

Or maybe you'd like a car analogy... In this case it would be like GM patenting a car because their car ships with, as standard, a specific feature set that no other company has offered as standard before. I hope you now understand exactly why people are rather pissed about this patent.
