Comment Re:What system d really is (Score 3, Insightful) 928

> For those who think SysVinit style init systems is what Linux should be using the next 30 years, there is Slackware. It is a nice general purpose distro that is very traditional. So nobody is forced to use systemd if they don't want to.

Only until some key piece of functionality is no longer available in that distro, because upstream decides to stop supporting the systemd-free code path, or because of other dependencies.

If I use KDE - which I do - then given the above, packages for it become unavailable in Slackware at some point. That means I will be forced to use systemd if I want to continue using KDE; and, assuming Slackware remains systemd-free, it also means I will have to change distributions.

Not trivial. Not easy. Not freedom of choice.

> It simply solve a lot of real world problems and makes life easier for both upstream developers, distro makers and end users.

That is simply a lie.

Comment Re:No, it's not time to do that. (Score 1) 299

While in principle I agree with you (I learned at a university that had several MIT PhDs in the computer science department, one of whom was its head, and later in my career, interviewing candidates and working alongside people from what I will call 'substandard' programs, I saw first hand that not all CS degrees are equal), I also realize that the people coming from the top universities gravitate to where the money is. If you are a small company, or a company that can't attract the top talent, you will be stuck with what you can get.

This situation isn't bound to change, so how do we deal with it? I think the solution has to be multifold and systematic to have any chance of success:

1. Not every programmer should have to be a systems developer. Partition your developers into two camps: a very small group of systems developers (for OS work, building development tools, and embedded work as needed), and a very large group of what I will call 'application' developers (for the applications end users will touch).

2. Limit the tools your Application Developers have available to them. They shouldn't be able to shoot themselves, or their users, in the foot.

3. Focus your systems developers on building tools and libraries for selected application-class languages that do not allow application developers to reinvent the wheel for things they shouldn't be touching, such as memory management and security (the aforementioned 'loaded guns'). Enforce standards for how the application developer group accesses and uses those tools.

4. To avoid having developers working on software that will only benefit one user, provide safe tools that allow end users to build their own simple applications (e.g. HyperCard, or other paradigms as appropriate, above and beyond spreadsheets and word processors). Once an application created this way becomes popular enough, you have the option of translating it through your application developer team... or leaving it as is. If you were smart when you built your (HyperCard-like) tool, you allowed it to be integrated with, or translated to, other systems in safe ways, so that it can be shared with minimal impact/workload.
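Point 3 in practice might look like a small, vetted library that the systems group hands to application developers. Here is a sketch (all names hypothetical) where file access is only possible inside an approved sandbox, so path traversal is prevented by construction rather than by developer discipline:

```python
import os
import tempfile

class SafeStorage:
    """Hypothetical systems-team library: application code gets file
    access only inside an approved root directory, so path traversal
    and stray writes are impossible by construction."""

    def __init__(self, root):
        self.root = os.path.realpath(root)

    def _resolve(self, name):
        # Resolve symlinks/'..' and refuse anything that escapes the root.
        path = os.path.realpath(os.path.join(self.root, name))
        if not path.startswith(self.root + os.sep):
            raise PermissionError(f"{name!r} escapes the sandbox")
        return path

    def write(self, name, text):
        with open(self._resolve(name), "w") as f:
            f.write(text)

    def read(self, name):
        with open(self._resolve(name)) as f:
            return f.read()

store = SafeStorage(tempfile.mkdtemp())
store.write("notes.txt", "hello")
print(store.read("notes.txt"))          # round-trips normally
try:
    store.read("../../etc/passwd")      # traversal is rejected, not honored
except PermissionError as e:
    print("blocked:", e)
```

The point is not this particular API but the division of labor: the systems team audits the `_resolve` logic once, and application developers never handle raw paths at all.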

If we don't find a way to do something like what I describe, we will continue to suffer as we keep expecting all CS graduates/programmers to be equal. HR and execs don't like this because they want all developers to be interchangeable widgets... but reality does not bend to policy.

Comment Re:For the rest of us (Score 2) 299

I had the pleasure of access to and use of an Amiga 1000 in '86. It hosted many of my firsts in computing:

First use of a graphical interface

First use of virtualization (it hosted an IBM DOS virtual environment in a window, used for running and building DOS applications for the IBM PC; the Amiga OS was perfect for this since it virtualized its own components as well)

First filled polygon video game (3D up to that point was wireframe)

First real multimedia PC used

First use of a PC with a multitasking operating system

First use of full featured embedded scripting capabilities in an operating system (MS DOS batch processing doesn't count)

After using the Amiga, nothing that followed really surprised me - but most commercial solutions I found limiting in one way or another (e.g. Windows 3.1's lack of preemptive multitasking).

In '95 I was looking at OS/2 Warp as a better alternative to Windows 95 [I wanted something with tools I could quickly be productive with; I had spent many hours with the Win32 API bible with little to show for it, and J++ was simply a failure from an interoperability standpoint] when I was introduced to Linux, which had what was missing and dovetailed nicely with my studies at the university. (The computer science lab was well equipped with Sun Solaris machines, so we did all of our development coursework on Unix; when I got Slackware 2.3 up and running, I started dialing in to work on my projects from home - my first exposure to telecommuting.)

The biggest lesson I took from my experience with the Amiga is that being productive with a computer should be easy - and if it isn't, you should look elsewhere until you find something that is. That may seem counterintuitive given that I ended up with one of the most difficult distros to install at the time. Having a built-in tool set in the form of command-line scripting and other extension languages, in addition to the core system programming languages, was key to my own efficiency in getting things done. That said, even for someone who knows how to program, finding easier/quicker ways to get work done is valuable. Not everyone is a computer scientist, and no one should have to be one to easily make working tools for themselves. While projects have addressed subsets of this arena (spreadsheets, word processing, etc.), no one has addressed the fundamental problem of creating a malleable tool for general-purpose use - that I am aware of.

IT departments in large companies, and the shrink-wrapped software companies, do a good job of accomplishing large projects and serving particular popular niches (standard office suites) - but they are horrible at addressing the unique needs of the individual. That's where something like HyperCard would find a home.

Comment Thinly Veiled Attempt... (Score 2) 928

The only good thing I can see about systemd is the exposure of some Linux system APIs that were not exposed via the POSIX subsystem. Nice - but not required by most of us, and it could have been added to existing standards-based init daemons without totally rewriting the rules.

Otherwise it seems more likely a thinly veiled (actually not veiled at all, given comments from the principals) attempt to fragment the POSIX world, forcing projects with limited resources into a Hobson's choice: support systemd-based Linux exclusively, or POSIX standards exclusively. It breaks the write-once, compile/run-anywhere property that was generally available to those who made sure their applications were POSIX compliant. This means a lot of software that was available across Linux and the Unix flavors (BSD) will now be exclusively available on one or the other - thus fragmenting the *nix world.

Software is not separate from the ethics that surround it. This approach, and the apparent rabid anti-interoperability view behind it, is arrogant and self-serving at the expense of cooperation and choice. Furthermore, the monolithic architecture, obfuscated binary logs, and centralized configuration are antithetical to the Unix way; they make a Linux system as difficult to deal with as a Windows system from an automation and management perspective, and they raise security concerns (the greater the complexity in a system, the greater the opportunity for bugs - and thus the greater the attack surface).

Finally, it throws away many, many years of experience and knowledge acquired by system admins, developers, and users about how a *nix system operates and is configured. This fragmentation of the human-factors aspect will by its very nature cause faults and issues during operation.

So - for a host of reasons, I believe it is technically - and more importantly - ethically wrong.

There is actually one more good thing I can think of: it will spawn new distros, software projects providing alternatives to various applications in the stack, and perhaps new operating systems altogether - with a renewed focus on design simplicity (KISS) and all the benefits that come with it. Once a system becomes too complex to understand, are you sure you can trust it? So to recap: systemd has two things going for it - exposure of Linux APIs, and the power to breathe life into further exploration of alternatives in the OS/application layer.

Comment Re: are the debian support forums down? (Score 1) 286

But, why can't I just rip out systemd? Oh - because so many service projects/distros are only supporting systemd today that you have to have it around if anything you download in the distro happens to use the API of the non-POSIX POS that is systemd.

systemd core dumps are not written to disk as plain files by default - they are captured into the binary journal; you have to extract the data (e.g. via coredumpctl) before you can debug.

systemd log files are binary; you can't run grep or other text-parsing tools against them for automation unless you extract the data first.
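There is a workaround for that extraction step: journalctl can export entries as JSON, one object per line (`journalctl -o json`), which ordinary text tooling can then process. The sketch below fakes a few exported lines (the sample entries are invented for illustration) and shows the grep-style filtering you would otherwise do on a plain-text syslog:

```python
import json

# Hypothetical sample of `journalctl -o json` output: one JSON object
# per line, fields abbreviated for illustration.
sample_export = """\
{"MESSAGE": "Started OpenSSH server daemon.", "_SYSTEMD_UNIT": "sshd.service"}
{"MESSAGE": "Accepted publickey for root", "_SYSTEMD_UNIT": "sshd.service"}
{"MESSAGE": "Reached target Multi-User System.", "_SYSTEMD_UNIT": "init.scope"}
"""

def grep_journal(lines, pattern):
    """Return MESSAGE fields containing `pattern`, like grep on a text log."""
    hits = []
    for line in lines.splitlines():
        entry = json.loads(line)
        if pattern in entry.get("MESSAGE", ""):
            hits.append(entry["MESSAGE"])
    return hits

print(grep_journal(sample_export, "Accepted"))
# ['Accepted publickey for root']
```

It works, but it proves the complaint: every existing grep/awk/sed pipeline now needs an export-and-parse stage bolted on the front.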

systemd encourages abandonment of POSIX compliance, which is a key component of the interoperability between the various flavors of Unix and Linux (I loved being able to write a shell script on a Unix machine and copy it over to a Linux machine with little to no modification). Dennis Ritchie must be spinning in his grave at this bastardization of his brainchild.
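That portability comes from sticking to what POSIX actually specifies. A minimal sketch using only POSIX sh constructs (`IFS= read -r`, `set --`, `$(( ))`, here-documents - no bashisms), which should behave identically under any POSIX-conformant /bin/sh, whether on Linux, BSD, or a commercial Unix:

```shell
#!/bin/sh
# Count words across lines using only POSIX sh features:
# no arrays, no [[ ]], no process substitution.
total=0
while IFS= read -r line; do
    # Default-IFS field splitting turns the line into positional
    # parameters; $# is then the word count for that line.
    set -- $line
    total=$((total + $#))
done <<'EOF'
one two three
four five
EOF
printf '%s\n' "$total"
```

Because every construct here is specified by the standard, the same file can be copied between systems and behave the same - exactly the property at stake.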

The only way to avoid this is to roll your own distro, or to support distros that steer clear of it (I was shocked to hear even Slackware was considering support for systemd, given that it has always been as close to System V Unix as you could get in the Linux world. Thankfully, so far they have not succumbed.)

For people who run desktop machines for their own use - running applications in user space for the most part - systemd may be fine. For those of us running servers, with many man hours of system administrative automation in place - this spells catastrophe in the form of forced obsolescence of our custom code and automation.

As I read in one article - if systemd is allowed to prevail, then we can all kiss the days of an administrator controlling his system his own way goodbye. It will split the work of people who do development - and at some point they will not be able to continue; one case in point: http://alien.slackbook.org/blog/on-lkml-an-open-letter-to-the-linux-world/

From that article:

Last week I asked the SDDM developers to reconsider their decision no longer to support ConsoleKit because Slackware does not have systemd or logind and thus we need to keep using ConsoleKit. The answer could be expected: “answer is no because ConsoleKit is deprecated and is not maintained anymore” and therefore I had to patch it in myself. Of course, the ConsoleKit successor systemd-logind, written by the same team that gave us all the *Kit crap, depends on PAM which we also do not have in Slackware. One of the fellow core developers in Slackware, who is intimately familiar with the KDE developers community, has heard from multiple sources that KDE is moving towards a hard dependency on systemd (probably because they are going to need the functionality of systemd-logind). We all know what that means, folks! It will be the day that I must stop delivering you new KDE package releases for Slackware. That’ll be the day.

So this turn of events might be nice for some script kiddie sitting in his mother's basement....but for the rest of us who have to get work done with and through Linux - this is a royal pain in the arse.

Comment Re:Are you patenting software? (Score 0) 224

While you say you aren't going to wield your patents offensively, we can't be absolutely sure you won't. It really comes down to an ethical choice: avoid them altogether and fight them vigorously when they come knocking on your doorstep, or embrace them for the sake of 'defense'.

Power corrupts - and so does the power accrued from misapplied patents.

Comment Re:API consistency; negative tests (Score 1) 51

I don't disagree with your overall premise: bureaucratic 'big design up front' methods don't work except for an exceedingly small subset of problems in the real world.

However, you largely ignore a key point that I think the IEEE is (belatedly) trying to address: to this point, our design focus has been on meeting the functional design criteria first and security last (if there is time to deal with it at all - in my experience it is the first thing cut when time is at a premium and pressure mounts to ship).

Security has to be seen as a core function of every application that communicates across a network - and, by necessity, of many that don't, due to their incestuous relationship with other systems on the machine that do. I also think that if you start your overall design with security in mind, that will influence many aspects of the design, from API construction, to modularity, to the design of the tools and operating systems the resulting applications live on and in.

To do that well without any framework or controls would require every application programmer to be a top-notch systems developer. In my experience, the vast majority of professionals in the application development space will never rise to that level of expertise. But code must be written and applications deployed, as the appetite for more and more automation does not abate. There are not enough programmers competent in systems development to do the job without help. So, what do we do?
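As a taste of what such a framework can look like: a sketch (class and method names hypothetical, using Python's standard sqlite3 module) of a data layer where the 'systems' side exposes only parameterized operations, so application code cannot build injectable SQL strings even by accident:

```python
import sqlite3

class SafeDB:
    """Hypothetical application-developer database layer: only
    parameterized operations are exposed, so string-built SQL (and the
    injection bugs that come with it) is not even expressible."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

    def add_user(self, name, role):
        # Placeholders only -- callers never concatenate SQL themselves.
        self.conn.execute("INSERT INTO users VALUES (?, ?)", (name, role))

    def find_role(self, name):
        row = self.conn.execute(
            "SELECT role FROM users WHERE name = ?", (name,)).fetchone()
        return row[0] if row else None

db = SafeDB()
db.add_user("alice", "admin")
# A classic injection payload is stored as inert data, never run as SQL.
db.add_user("bob'; DROP TABLE users; --", "guest")
print(db.find_role("alice"))
```

The application developer's competence no longer determines whether injection is possible; the library's design does.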

I have a pretty good idea of what I think should happen - but I'm curious what you would do given that reality (assuming you can't guarantee deep competency)?

Comment Re:It should be dead (Score 1) 283

This brings to mind the purposes of creating and using code in the first place:

Where you are the only person that needs to see and understand it - this is fine; it serves your purposes happily for you.

On the other hand, where the purpose of creating and using the code extends beyond one person, this structure does not serve that need effectively. This is primarily because, while it may function, it is too brittle to be maintained by a team of developers over many years through many iterations in design without considerable time and errors generated in the process.

From seeing things like this, I would argue that there are way too many clever programmers - and not enough smart ones.
