
Journal: PyWeek 3

Last September I entered the 3rd PyWeek contest. I should actually say "we", because I entered as part of a team of 15 people. We ended up in 3rd place, which we consider darn good for our first time.

It all started in August, at the Python Day organized by GrULiC. We invited people from the rest of Argentina, especially members of PyAr (the Python Argentina user group). A group of guys there had participated in the 2nd PyWeek and gave a talk (IMO the best one of the event) titled "How to make a computer game in 7 days". I expected a run-of-the-mill talk explaining the facilities provided by some library like pygame, some useful algorithms, etc. Instead, alecu talked about their experience in PyWeek 2, the fun they had had, the contest and its rules, the games other participants made, the teamwork involved, and a lot of new and fascinating things. Oh, he also showed some pygame code and examples, but that was not the core of the talk. At the end, there was a big group of people, including me, highly motivated and saying "Hey, we want to do this".

Although one of our options was to join the existing PyAr team for PyWeek 3 and work together remotely (we live in Córdoba, and the PyAr team members were all from Buenos Aires, 700 km away), we were so many people that we decided to form a (large) local team. We ended up being about 15 people, including Python programmers, graphic artists, writers, designers, musicians, sound effect guys, etc. (with a lot of people filling more than one role).

After the initial pre-week spent thinking about possible ideas for the different themes, the theme for the contest turned out to be The Disappearing Act, which was our favorite. We worked hard during the first weekend and had a bare prototype that got polished slowly during the week (slowly because most of us have lives, too :) ). The last weekend was a marathon that resulted in our final Saturday Night Ninja release.

We had a lot of fun. I was in the design brainstorm that came up with the game idea (a ninja who must get through a maze without being seen by the guards, using different camouflage clothes); I programmed most of the game logic, and worked on the sound FX team. I also learned (or confirmed) a lot of things about software (and game!) development:

  • Game quality has a lot more to do with "production" (artwork, design, playtesting) than with the quality of the code. Incidentally, the code was only a small part of the work involved (done by 3 of our 15 people). I already knew this, but knowing something and experiencing it vividly are really different things.
  • In short projects people tend to ignore a lot of "good" software engineering practices. This is not a bad idea if it saves time and helps meet the "one week" deadline; most of these practices are meant to promote maintainability of code and scalability of design, which are not goals in a contest like PyWeek. I have worked like this in settings like the ACM ICPC contest (where every minute counts). However, even when maintainability is not important, the whole code fits in your head, and the timespan is too short to forget what you have done and why, code quality still matters in a team project. I put some effort into getting clean code and interfaces; it is not a beautiful design (given the time constraints), but it is not a dirty hack either. Another programmer was more relaxed about this (more so given that this was his first medium-sized Python program), and the third decided not to care about it at all. Nearly every investment in code quality paid off, allowing faster fixes during testing and not depending on the author of a specific piece of code to change something.
  • Even in a very short project like this, testing takes a lot of time. Especially in a game, where properties like "playability" and "reasonable difficulty" can be achieved only through testing. Our game would have been much better with some more testing; we didn't assign enough time for that. I feel it is partly my fault (I knew about it and didn't push hard enough to get it done), and partly that the rest of the team was still thinking about fancier graphics and new ideas instead of test, test, test (when we were close to the deadline and I asked "have we done any testing yet?"). I don't blame them, however: testing is boring, thinking up and building new stuff is fun, and a big goal of all this was to have fun (and that was a great success).
  • Python is great for blending code written by three people in a rush and without fully agreeing first on interfaces.
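That last point deserves a tiny illustration. This is a hypothetical sketch (not actual SNN code) of why Python tolerates blending code written by different people without a pre-agreed interface: any object with the right method names just works (duck typing).

```python
# Hypothetical example (class and method names invented): two teammates
# wrote entities independently, with no shared base class agreed upon.

class Ninja:
    """One teammate's player entity."""
    def draw(self, screen):
        screen.append("ninja sprite")

class Guard:
    """Another teammate's entity, written separately."""
    def draw(self, screen):
        screen.append("guard sprite")

def render_all(entities, screen):
    # The game loop only cares that each entity has a draw() method;
    # no interface declaration or common ancestor is needed.
    for entity in entities:
        entity.draw(screen)

screen = []
render_all([Ninja(), Guard()], screen)
print(screen)  # ['ninja sprite', 'guard sprite']
```

In a statically typed language this integration would have required agreeing on an interface up front; in a one-week rush, not needing that saves real time.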

I hope we can make some time to improve the game a bit. We wanted to go on, but abandoned it after the contest... Hopefully one of the places where I will put my free time after getting my degree is SNN (at least cleaning it up and publishing it).

By the way, our mentors from PyAr took first place this time with Typus Pocus. Congratulations! We plan to be a more serious contender in PyWeek 4, so watch out :).

User Journal

Journal: Back from hibernation....

Five months after my last post, I'm back here. It's not that nothing has happened to me; rather, so much happened that I stopped bothering to write it down.

I got my Licenciado degree! That's what took up a good part of these five months... also some activity with except, especially while John was away in the UK during August.

I'm not going to catch up by recounting everything that happened, though I will write a bit about my final project. Soon.

Wireless Networking

Journal: Cellphone

A bit over a year ago I was convinced to buy a cell phone, so I would be easier to reach. I bought a Nokia 1100 (a cheap-and-nasty model) with prepaid service from Unifon/Telefónica (just before the merger with Movicom that produced Movistar).

I never learn, and it still surprises me that people can massively use such bad products (despite my years of telling people that the virus they caught is a consequence of using Windows). I know I'm not the only one; many people I know complain and have problems. I don't know if it's the handsets, the network, the company, the protocol, interference, or solar storms. What I do know is that even with signal, messages get lost despite being marked as sent, or arrive despite being reported as failed, calls sound awful, or can't be placed at all.

In fact, I've gotten much more use out of the phone's flashlight, clock, calculator, and notepad than out of the service itself. I can almost never make a call. Much of the time the call doesn't even get attempted; much of the rest, the audio isn't good enough to hold a conversation, so I can only talk properly maybe 20% of the time.

Text messages are a bit better, maybe 70 to 80% effective. Unfortunately there is no reliable way to know whether a message arrived or not (who designed that protocol? Every other digital network solved that decades ago), and those statistics get worse at peak calling times (weekend nights, for example) or on holidays (Friend's Day, Father's Day).

I think I've had more missed connections from relying on the cell phone than I had before, when I arranged things with more lead time over some more reliable channel (landline, email, IM). Sometimes believing you have communication and having it fail is worse than knowing you don't have it and planning ahead.

And despite all this, I know I'll be carrying this device around for quite a while longer...

User Journal

Journal: Eiffel Struggle results

Last year, I submitted an entry to the International Eiffel Programming Contest 2005 (also known as the Eiffel Struggle, from previous editions). My submission was EWS, the graphics/windowing system/toolkit library that anthony and I are using, mainly for freemoo (which didn't make the Struggle deadline).

The results became available quite recently. I finished in seventh place (of ten entries), with a silver award, which made me quite happy given my expectations and what I knew about other serious contenders. I knew I was weak in some of the judged aspects (internal documentation and portability).

The organization provided detailed results privately to each contestant. These are mine (with score, maximum score, and the judges' description for that value):

  • Installation documentation: 6/10 (Useful documentation)
  • Usage documentation: 7/10 (Good and complete documentation)
  • Construction documentation: 3/10 (There are some notes)
  • Innovation and community value (in Eiffel): 7/10 (This entry shows great promise. Don't abandon it!)
  • Innovation and community value (in OSS): 6/10 (Interesting)
  • Portability among Eiffel compilers: 4/10 (one compiler)
  • Portability among platforms: 4/10 (two OSs)
  • How easy is it to install the application or library?: 8/10 (Followed conventional standards like ./configure, make and make install)
  • How well is the entry constructed? 9/10
  • Source code readability: 6/7 (Easily understandable)
  • Source code formatting: 3/3 (Conventions (Mostly) followed.)
  • Use of the Principles of Design by Contract: 8/10 (A shining example of the power of DbC.)
  • Ease of use: 8/10 (The classes provided are well-designed. And very powerful!)
  • How much effort seems to have been put into it? 7/10 (We can see that you spend quite some time.)

I am especially proud of the score of 9 in construction :)


Journal: Second systems (cont'd)

One problem with posting so irregularly here is that I accumulate things to write about, make mental notes, time passes, and then I forget. The previous article (Second Systems) had been planned long before, but I took so long to write it that I forgot one of the examples:

SCons: this is a second generation of software-build software, or really, of software for the more general problem of resolving dependencies when you have a complicated set of files generated from other files. Everybody knows the classic system for this, UNIX's make. There are a couple more variants floating around, but the idea is similar in all of them. Many of make's problems are well known, so much so that there are dozens of patches layered on top of make to make it work, from homegrown shell scripts that generate Makefiles to things like Imake or automake.

SCons approaches the problem in a slightly different, but I think more practical, way. To begin with, one of make's problems is that it doesn't handle projects with files spread over a directory tree very well (that is, almost any non-trivial project). There are ways to have one Makefile per directory and invoke them recursively, but they are complex to describe, and some things are impossible (like declaring dependencies between parts in different branches of the tree). This makes the build do more work than it should, and prevents parallelism. SCons handles the whole tree at once, and cross-directory dependencies are just as simple as ordinary ones (in that sense it is a generalization similar to the step from CVS to SVN: going from managing loose entities inside a tree to the tree being a single entity). You can specify pieces of the project in files in the subtrees, for modularity, but the tool then works by analyzing the whole tree. In that sense it is quite a bit stronger than make.

The other strong point is the notation. Build processes tend to have many repetitive parts with small variations. To address that, make has several mechanisms for generalizing rules, or wildcards for denoting large sets. In my experience, even when consciously working to make things easy for make, those generalizations fall short. So you start generating pieces of Makefiles with other programs and including them, or copying and pasting chunks, or similar hacks. SCons has a much stronger language for describing these things, so strong that it is Turing complete. The language is called Python. Which is also a good idea, because there was nothing new to learn. For simple cases the notation is as compact as a Makefile; for complicated cases, being able to use a loop or a list comprehension to generate a pile of file names, or relations between source and object files, is very practical. Mind you, don't misread this as programming the build in Python; what you write is a Python script that calls certain functions predefined by SCons, which declare the dependency tree and the build rules. After running it, SCons figures out which parts need regenerating (based on timestamps, checksums, and what the user asked for) and works from that. It is as simple as a Makefile, but lets you do complicated things when the task demands it.
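As a rough idea of what this looks like, here is a minimal SConstruct sketch (the file names are invented for illustration; `Environment` and `Program` are standard SCons declarations):

```python
# Hypothetical SConstruct sketch. This is plain Python: ordinary loops
# and list comprehensions generate the file lists, and SCons functions
# declare the dependency tree and build rules.

env = Environment(CCFLAGS='-O2')

# A list comprehension instead of make's pattern rules and wildcards:
sources = ['main.c'] + ['module_%d.c' % i for i in range(1, 4)]

# Declare the target; SCons scans #includes itself and decides what
# needs rebuilding based on timestamps/checksums.
env.Program(target='game', source=sources)
```

For a project this small the notation is about as compact as a Makefile; the payoff comes when the file lists or rules need real computation.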

Another advantage of its using Python is that it is easy to write extensions to build things other than the predefined ones. Also, each extension is a bit smarter than the equivalent make rules. For example, the C and C++ build rules know how to look at the #includes and generate whatever dependencies are needed; there is no need for the special hack of running gcc -MM to generate a Makefile fragment that later gets included.

It is a nice tool for those of us who use compilers. I have played with it a bit on programs in C, Eiffel, and fairly custom things (bulk image processing/conversion), and it has held up quite well. I have a short talk about this pending at GrULiC.


Journal: Second systems

In "The Mythical Man-Month", Fred Brooks describes the "Second System Effect": something that tends to happen when the designers of a simple, elegant, and successful software system design its successor and, in their eagerness to solve every problem and add every missing feature, raise the complexity to the point of inevitable failure.

I have no doubt that examples abound everywhere, even outside the software industry (see: "Matrix: Reloaded"). But among the software I use, there are many second generations of tools that really shine for having firmly escaped this effect, managing to be better, avoiding problems, and being conceptually simpler. I don't know whether they qualify as "second systems" in the original sense (the design is not by the same person), but they clearly are second systems in the sense of trying to be a "successor of". A couple of examples:

Subversion: the modern successor of CVS. CVS was for quite a while the default revision control system in the unix and free software world. Everybody knows its problems and limitations, but it undeniably works. Yet slowly you can see Subversion dethroning it. It is used in almost the same way, but tends to do everything a bit better. It dodges CVS's problems, and at the same time generalizes several CVS concepts into a few that are simpler, easier to use, and more efficient.

There are some other alternatives floating around that try to be much grander and solve broader problems (in particular, being distributed instead of having a centralized repository), but they still have unsolved problems. Maybe in the future they will solve them, but as of today everything seems to indicate that the migration is from CVS to SVN.

Trac: I ran into Trac even more recently than SVN (a bit over a year ago), but it is rising fast. It is not the direct successor of one tool, but of two or three. It could be seen as the direct (though smaller) successor of GForge (the engine behind SourceForge), in the sense of being a software development platform. It integrates an issue tracker, a wiki, a Subversion browser, and a couple of other goodies. The issue tracker is simpler and at the same time more practical than the king (Bugzilla), the wiki is comparable to the others, and the browser is quite a bit nicer and more readable (which matters a lot in a data-presentation tool) than other SVN browsers. On top of all that, it integrates them very well (you can use wiki notation everywhere you can write, including SVN logs; you can cross-reference one component from another, for example the issue tracker from the wiki, or a piece of the code from an issue).

That adds up to a process-management tool that is taking the world by surprise. A couple of years ago only a few people knew about it. Today everybody talks about Trac, and a pile of projects are migrating, because it is plainly simpler and more effective than the previous alternatives.

There was some other example floating around. If I remember it, I'll add it.

User Journal

Journal: And our guest blogger is...

I haven't written much here lately, but I have been posting in other places I was invited to.

The first is a Master of Orion 2 blog; apparently there is a small group of fanatics who still play it, and they do a lot of reverse engineering for strategy analysis. I mentioned in a forum that I had statistics on the map generator (the ones anthony and I gathered for FreeMOO), and they asked me to blog them.

The second (in chronological order) is one called Team Eiffel, where there are several well-known Eiffel people, plus a couple of us gatecrashers :). They invited me at first to talk about SE12, but I ended up slipping in a few more posts.

I have (as always) a couple of things queued up to put on this blog. When the time and motivation meters are both up at once, it will happen...


Journal: Hardware accelerated 3D in Linux

Recently, I tried to enable hardware 3D acceleration on some Linux machines. As my previous video-configuration experiences were on machines with no 3D hardware, or with 3D hardware that was configured automatically (by the nvidia-installer on one occasion, and by Ubuntu and Mandriva on others), I had no idea how to do this manually.

Documentation for this sucks quite badly. Several components must work together for 3D to work, but this is mentioned almost nowhere. There is documentation for each of the components, but no global map to tell you which ones you need, or whether you are missing some element you should know about but don't. So I hope this account of my experience helps. I will also try to explain how to check whether each component is working.

The whole list is something like: the DRM (a kernel module) for your card, your X server with the GLX and DRI extensions, properly set up /dev/dri/* character devices, OpenGL libraries, a video driver supporting 3D, and the video driver's DRI module. Any of these missing can result in no hardware acceleration and no clue about what is missing. Let's see them one by one.

The DRM consists of a couple of kernel modules. One of them, 'drm', is generic. The other is specific to your video chipset family (for example, 'i810.ko', 'radeon.ko', 'sis.ko', etc.). Most of them depend on the 'agpgart' module. Normally

modprobe myvideo

should do the job. To see what choice of video families you have, check /lib/modules/your-kernel-version/kernel/drivers/char/drm. The stock kernels include support for some cards, but if yours is not supported you should get the driver separately. The nvidia installer compiles the DRM for nvidia cards; on Ubuntu you also have the nvidia-kernel package (or the linux-restricted-...-nvidia-legacy for older nvidia cards).
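To check which DRM-related modules are currently loaded, you can eyeball /proc/modules (or lsmod). As a small sketch, here is a Python helper that does that filtering; the module list and the sample text are invented for illustration, and on a real system you would read /proc/modules instead.

```python
# Sketch: given the text of /proc/modules, report which modules from the
# DRM stack are loaded. The set of names below is a hypothetical sample
# of chipset modules, not an exhaustive list.

def loaded_drm_modules(proc_modules_text):
    """Return the set of loaded module names belonging to the DRM stack."""
    drm_related = {'drm', 'agpgart', 'i810', 'radeon', 'sis', 'via'}
    loaded = {line.split()[0]
              for line in proc_modules_text.splitlines() if line.strip()}
    return loaded & drm_related

# Made-up sample in /proc/modules format; replace with
# open('/proc/modules').read() on a real machine.
sample = """\
via 40348 2 - Live 0xe0a40000
drm 72148 3 via, Live 0xe0a10000
agpgart 34664 2 drm, Live 0xe09f0000
usbcore 112644 4 - Live 0xe0960000
"""

print(sorted(loaded_drm_modules(sample)))  # ['agpgart', 'drm', 'via']
```

If the generic 'drm' module shows up but your chipset module doesn't, the modprobe step above is the one to revisit.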

If there is no package for it, you will probably need to compile it yourself. Check it out with

cvs -z3 co drm

To compile, run

LINUXDIR=/usr/src/source-tree-path make

; you need the Linux source tree properly configured (i.e., after make *config). If you have an i386 kernel (like Ubuntu's), a check in the makefile for the CONFIG_CMPXCHG variable will fail. Disabling that check worked for me (I only tried the VIA and Intel 810 drivers; perhaps other drivers really need it). After that, copy the module to the right place (.../kernel/drivers/char/drm) and run depmod -ae.

More details about drm:

If everything went fine, modprobe of your video module should succeed, and dmesg should show a message saying that the driver was detected and enabled for your card. For example:

[drm] Initialized via 2.9.0 20060111 on minor 0: VIA Technologies, Inc. VT8378 [S3 UniChrome] Integrated Video

Once everything is properly set up, you will not need to load the kernel module manually; it will be requested automatically by your X server. If you have something like devfs or hotplug making the device nodes for you, after inserting the module you should also have:

$ ls -l /dev/dri/
total 0
crw-rw-rw- 1 root root 226, 0 2006-02-11 11:39 card0

You should make sure you have the Mesa 3D libraries installed, an implementation of OpenGL. There are a handful of "flavors" of Mesa for different uses, and you will need the variant that runs as an OpenGL renderer in X. Most new Xorg installs provide what you need by default. Older XFree86-based distributions require a "Mesa for X" package; for example, Debian sarge has "xlibmesa-gl" and "xlibmesa-glu". You should have /usr/lib/ and /usr/lib/

After that, you should configure the X server, enabling the GLX and DRI modules (these are X modules, not kernel modules). GLX alone enables 3D operations in the X server (you need it even for software rendering). The DRI extension is the component of X that talks to the kernel module we just set up. Both modules come with XFree86 or Xorg; you just have to enable them:

Section "Module"
Load "dri"
Load "glx"
EndSection

If the GLX module was properly loaded, running 'glxinfo' from X will show a lot of information; if it is not loaded, you will get a message (or several) saying "extension GLX missing on display". Among the messages you get when GLX is successfully enabled are "OpenGL version string: 1.4 Mesa 5.0.2" and "direct rendering: No". With that you can run 3D applications with software rendering (slow). If you happen to have the rest of the needed video drivers, you may already have hardware acceleration working at this point. In that case the above messages will say "direct rendering: Yes" and "OpenGL renderer string: Mesa DRI i810 20050818" (note the word DRI, and the name of the card, i810 in this case).
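Since you end up reading glxinfo output over and over while debugging this, here is a small Python sketch that classifies it. The function and sample strings are mine (mimicking the messages quoted above); on a real system you would feed it the actual output of the glxinfo command.

```python
# Sketch: classify glxinfo output as hardware rendering, software
# rendering, or GLX missing entirely. Sample strings are made up to
# mimic typical glxinfo messages.

def rendering_status(glxinfo_output):
    """Return 'hardware', 'software', or 'no GLX'."""
    if 'extension GLX missing' in glxinfo_output:
        return 'no GLX'
    for line in glxinfo_output.splitlines():
        if line.startswith('direct rendering:'):
            value = line.split(':', 1)[1].strip()
            return 'hardware' if value == 'Yes' else 'software'
    # No 'direct rendering' line at all: GLX is not usable.
    return 'no GLX'

software_sample = ("OpenGL version string: 1.4 Mesa 5.0.2\n"
                   "direct rendering: No")
hardware_sample = ("direct rendering: Yes\n"
                   "OpenGL renderer string: Mesa DRI i810 20050818")

print(rendering_status(software_sample))  # software
print(rendering_status(hardware_sample))  # hardware
```

'software' here corresponds to the slow-but-working state described above; 'hardware' means you may be done already.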

When the GLX module fails to load, check the X server log; you should find lines saying

(II) LoadModule: "glx"
(II) Loading extension GLX

Any error message should be near those lines. A possible problem (besides errors in the config file) is that you lack the Mesa libraries mentioned above, or that they don't match your X server.

At this point, you should be able to run 3D apps, although perhaps quite slowly. Besides the important "direct rendering: yes/no" output, you can check with the glxgears demo program. It is also useful for knowing how much acceleration you get, because it prints frames per second to stdout. On Ubuntu, the FPS printing is disabled by default, so you need to run it as glxgears -iacknowledgethatthistoolisnotabenchmark. You will probably get 50-100 fps at the default window size with software rendering. You should get a few hundred on a cheap on-board video card, or over a thousand on a high-end graphics card.

Now it is time to check that the DRI extension is working. On newer X servers you have a tool, 'xdriinfo', for that. It outputs something like 'Xlib: extension "XFree86-DRI" missing on display ":0.0"' if DRI was not properly loaded, "screen 0: none" if DRI was enabled but no kernel support for your card was found, and "screen 0: drivername" if a proper DRM module was found. In the X server log, you should also find lines like:

(II) LoadModule: "dri"
(II) LoadModule: "drm"
(II) Loading extension XFree86-DRI

with any error messages nearby. Not having those lines indicates an error in the X config file.

If the extension loaded successfully, and the kernel module is set up correctly, the X log will have several messages labeled '[dri]' and '[drm]', usually ending with

(II) I810(0): [DRI] installation complete
(II) I810(0): direct rendering: Enabled

(the driver name, I810, will probably differ, and there may be some lines between those two messages). Note that this "direct rendering: Enabled" may not match the "direct rendering: yes/no" output of glxinfo. The message in the X log indicates that the X server could negotiate with the kernel the operations needed for 3D; however, without the proper video driver this step can succeed while glxinfo still reports "direct rendering: No" (and thus no 3D acceleration). One usual cause of this, besides a missing video driver, is a lack of proper permissions on the DRI device. This is the case if 3D works as root but not as other users. The permissions of the DRI device are, weirdly enough, managed directly by X, with a section in the config file:

Section "DRI"
Mode 0666
EndSection

These are normal unix permissions, and you may also use a specific user and/or group to restrict 3D usage to a specific set of users. In the example above, any user can use the 3D hardware.

The last step is getting the right video drivers. Two drivers are required: one for normal X clients (desktop apps), and another that extends it for 3D acceleration. The first is selected by the Driver "foo" line in the "Device" section of the X config file, and is looked up at /usr/X11R6/lib/modules/drivers/foo_drv.o (or .so if you have a recent Xorg). The DRI driver is loaded by the first one, and lives in /usr/X11R6/lib/modules/dri, or /usr/lib/dri in some distributions. The name of the second driver usually ends in _dri.so; the prefix is related to your video card model, but it does not always match the name of the 2D driver. For example, the DRI module matching via_drv.o (for VIA graphic chipsets) has a different prefix, and i810_drv.o (for Intel graphic chipsets) may load one of two different DRI modules depending on which specific chipset you have. As you may guess by now, you need a 2D driver compiled to look for and load the proper 3D driver, so if you have only the 2D driver, adding the 3D one will usually do nothing.

Every XFree86/Xorg distribution ships a bunch of 2D and 3D drivers that may be just what you need. In some distributions the 3D drivers come in a separate package (in Debian: xlibmesa-dri). However, your X distribution may not include the driver for your card, or may include only a 2D driver that is not 3D-aware. In that case, you may have luck with external drivers. The safest way is to compile them, but that requires the source code for your X system and quite a lot of work. A simpler way, although with varying chances of success, is to try precompiled modules; you can find some of them at . You should get a version with a date close to your X server's. Get the package specific to your video card, copy the _drv and _dri files to the correct directories, and check whether they work. Always back up your old files, of course, just in case X stops working. If you have problems related to library versions, the "common" package available at the same URL provides a libGL that should work with those drivers (you will have to rename your old libGL and libGLU). Choose a "common" package from the same date as the driver.

Driver problems should be diagnosed by checking the X log. I would paste the messages indicating success or failure, but they seem to vary from driver to driver.

With different subsets of the above instructions (depending on what had been set up automatically and what was missing), I was able to get every 3D card I could get my hands on working in the last few weeks, except for a SiS 630 (on a Debian system where I decided not to risk breaking the 3D libraries by installing the dri snapshot). The result was generally quite good, except for an ATI Rage Mobility in an old laptop, where hardware acceleration gave less than a 10% improvement.


Journal: A soothing light at the end of the tunnel

Four months have gone by since my last post here. In that post, I mentioned several political issues going on around the Eiffel language standardization and its implementations.

There have been some small but interesting movements in that story. ECMA-367, the new Eiffel standard, came out in June, making a slight blip on some radars but not much more. The standard includes some inconsistencies (some problems with non-conforming inheritance I mentioned before, and some changes in notation to avoid special cases, which don't solve the problem but still cause incompatibilities).

Meanwhile, the SmartEiffel team declared that ECMA was forking Eiffel (which, from some point of view, is true), and that they were going to keep developing an implementation of the "true Eiffel language" (which is, from some points of view, quite pretentious...).

As I said before, I believed that the only way to move forward would be to have a Libre (free as in speech) Eiffel compiler capable of forming a community around it. Visual Eiffel was too small, and SmartEiffel looked quite closed as a community. As an implementation, SmartEiffel had almost everything that was needed. The only remaining problem was the incompatibilities between the 1.x releases and the new 2.x, which made porting old software and reuse quite difficult.

On May 10th, a week after my previous post here, I emailed the members of the SmartEiffel team. I offered to work on a "sequel" to the 1.1 compiler: a compiler from the 1.x series that would be 100% 1.1 compatible, but would introduce several changes to ease migration to SmartEiffel 2.x. That could help the community unify on a single SE version (the 2.x branch), while being able to reuse old software and/or port it easily.

The answer was much more open than I had expected (I was thinking they would be reticent about having an SE fork around). They were quite happy about it, and made me feel more comfortable about branching (I could have just forked without their authorization, it being GPL code, but I didn't want to split the already small community). My work was going to be a "branch" instead of a "fork" (just because "branch" sounds friendlier), be developed independently, and be called "SmartEiffel 1.2 (transitional)".

I quickly started setting things up. I found the free hosting at , which is great (free SVN and Trac, the free integrated wiki+svn browser+issue tracker that we're also using at except). I set up my site there, at . On May 25th, I made a public announcement about the project on the SmartEiffel mailing list. My plan was to go on with 1.1 development at my site, with my own rules (an open repository, an open development mailing list, an open wiki).

Meanwhile, the SmartEiffel team started making some nice moves toward openness. They invited me to the developer team (which includes a subscription to the private developer mailing list and access to the CVS repository) so I could participate and have access to the revision history that would be useful for developing my branch. I joined with doubts about becoming part of a closed process. But there I learned that they were also planning to open up to the community, creating a public developer list and putting a wiki online.

I finished setting up everything needed for development during June, and made a release (1.2r0) on July 1st. That release was essentially a "rebranded" 1.1. Some days later, on July 6th, the SmartEiffel wiki went online. Four days later, the long-awaited gobo 3.4 was released. On July 23rd, the open SmartEiffel development list was announced. So July brought several pieces of good news and fresh air to Eiffel developers in the free software community.

Things seem a little better on the horizon now. Nobody has cared much about ECMA (with no implementations in sight). The SmartEiffel guys are improving their development process in an open-source-friendly way. SmartEiffel 2.x as a language is stabilizing and getting cleaner (2.2 even accepts some extra things for backward compatibility that 2.0 and 2.1 didn't). 1.2 keeps improving (2 releases since) and getting more compatible with 2.2 without becoming less 1.1 compatible, while adding some compatibility with other compilers.

An officially accepted Eiffel standard would be a nice addition, but I don't see one coming soon. A second best would be a de facto open-source standard (that seems to work well for things like Python or PHP). I can now see SmartEiffel getting there, though there is still some way to go. Wish us luck.


Journal Journal: Quality, Re-usability, Standards and Implementations


For a long time, the Holy Grail of Software Quality has been sought. A lot of effort has been made, but we have no satisfying solution to date; Brooks has even stated (and has a large following) that there is no silver bullet, which is usually misunderstood as "there is no solution" instead of its real meaning: "the solution involves a lot of coordinated efforts from different viewpoints, not just a single tool/method".

Some people (B. Meyer among them) have proposed some necessary (but not sufficient) conditions for quality. The most prominent of them is perhaps re-usability, defined by Google as "The ability of a package or subprogram to be used again without modification as a building block in a different program from the one it was originally written for."

Re-usability does not improve quality per se. This is quite obvious, but most people overlook the real reason re-usability matters for quality: work spent improving the quality of a reusable component improves every system that uses that component.

A sad story

Meyer designed Eiffel in 1985 with that in mind. All the OO mechanisms, the way inheritance is shaped in Eiffel, software contracts: the whole design shows that it was made with re-usability in mind. A lot. And that was a Good Thing (TM).

And then, in 1986, the Eiffel design was turned into an implementation by Meyer's company, ISE. And that was the first mistake, because although the language was designed to reuse components between programs, you could not use that software without depending on ISE. There was no Eiffel standard.

Some efforts were made, not much later. Meyer wrote a very clear language specification, in the form of a book, "Eiffel: The Language". And NICE was founded to standardize the library. Several implementations appeared. Everyone should have been happy at that point, shouldn't they? Not so much. It took until 1995 for NICE to produce their first standard library (ELKS95). And the standard was incredibly small; it had basic stuff like INTEGER, STRING and ARRAY, but not much more. Every compiler, of course, included a lot more than that (so you could use them to write real software). But given the lack of a standard, all these libraries were incompatible (besides being, usually, not 100% ELKS95 compliant).

And thus, the language designed for re-usability ended up with several incompatible implementations; software developed on one of them didn't work on the others. The choice was to write software using only ELKS and ELKS-based libraries, and to ignore (i.e., not reuse) everything else (which included most libraries bundled with compilers and most third-party libraries). A lot of Eiffel software is written from scratch today instead of reused, so a lot of people expect a sad end to the story. NICE hasn't done much since then; six years later they released ELKS2001, which was just some minor changes to existing classes of ELKS95.

Despite some efforts to bridge this problem (most notably the GOBO project, which maintains a useful, compiler-independent set of libraries bigger than ELKS), the problem persists today... or has just gotten worse.

In 2003 the standardization process was put in the hands of ECMA. The standard is moving forward quickly, but looks quite different from the language compiled by today's compilers. In fact, a pure ECMA-compliant compiler won't compile most of the code you can find today. Some backward compatibility may be added, but not all of the new features lend themselves to it. So, if any compiler goes the ECMA way, reuse (even between versions of the new compiler) will be almost impossible (even assuming that the newly introduced language design bugs get removed). And so, some compiler writers (like the open source SmartEiffel) have said publicly that they're forking and won't follow ECMA. It's hard to know what ISE will say, but given that it has clients to support, I can guess it will be conservative.

To complicate things even more, in 2004 the SmartEiffel team released the 2.0 version of their compiler. It introduced a lot of new changes and restrictions, enough to make most of the code that worked with 1.1 uncompilable. 2.0 was a disaster; in 2.1 they added some things that helped compatibility (while breaking others), and for the not-yet-released 2.2 they are adding some compatibility fixes.

The Eiffel community is splitting. And it is getting tired of all the bickering and forking and incompatibilities and the lousy standard. And people are abandoning Eiffel: the incompatibilities between implementations and standards hamper any attempt at reuse, and thus at software quality. Eiffel was designed to achieve software quality, but the design is not proving enough. So the different players today, all of them pulling in well-intentioned but different directions, are killing a beautiful language.

I'm not saying "Eiffel is Dying" (you may have heard the same about Apple or BSD). I'm saying "Eiffel is being killed. Stop it."

So what do we do now?

I don't think a company (ISE or any other) has the strength to survive the uncertainty around Eiffel today. They will be torn between keeping compatibility for their customers and supporting the standard, and they'll lose customers both ways.

I think the only way out is a strong open source community around Eiffel. A large, united community is more or less free of customer pressure, knows how to adopt good standards and reject bad ones, and keeps the supply of library goodies high ("comes with batteries included", they say in the Python community). However, that requires good open source tools. The obvious choice is the SmartEiffel compiler, but they keep breaking backward compatibility, so the availability of libraries is decreasing instead of increasing, and a lot of developers are tired of trying to keep up.

Another Free choice is Visual Eiffel (they open-sourced their command line compiler recently), but it has almost no community behind it. SmartEiffel has a "monopoly" on Free compilers, in the sense that it is good enough not to have much competition, but OTOH it is not good enough to hold the community together.

The only hope now is perhaps a good open source compiler, with an open development process, an effort toward compatibility, and an effort to make reuse possible for already written libraries. That could bring the existing community together. Perhaps that's enough to start growing again and keep the language alive. And perhaps then we will have an Eiffel implementation, a standard, reusability, and quality.

I hope.


Journal Journal: Ramblings on Eiffel and Haskell

Since the SmartEiffel folks took to making exotic extensions to the language, a couple of interesting discussions have come up on the mailing list.

In early February there was one about the new way of typing numeric constants. The latest version added explicitly sized types (INTEGER_8, INTEGER_16, INTEGER_32 and INTEGER_64, besides the classic INTEGER, which is by definition equivalent to INTEGER_32). And the SE folks decided to give numeric constants the smallest type they fit in. So "100" has type INTEGER_8, "1000" is an INTEGER_16, and "-100000" is an INTEGER_32. That, among other things, triggered a rather long thread about how to handle these things.

Someone proposed a Haskell-based approach, since Haskell generally handles these things very well (I remember a couple of cases where it didn't, but I don't have the counterexample at hand). I sent a message describing how it works in Haskell, and what Eiffel lacks to be able to do something similar (and took the chance to say that I like generic functions :) ).

The thing is, this triggered an interesting (private) discussion with Alexandre Ferrieux (one of the regulars on the SE list). It's the kind of language discussion I'd like to write and blog about more often. I quote:

From: Alexandre Ferrieux
To: dmoisset
Subject: Haskell and Eiffel
Date: Wed, 02 Feb 2005 15:48:47 +0100


I know there are millions of "A vs. B" language comparisons, but your post on the SE list shows familiarity (to say the least) with both, while many comparison-writers have a strong bias...

Would you care to give your view of Eiffel and Haskell ? For example, considering the obvious concern for cleanliness in Haskell and the current state of uncertainty and ugliness Eiffel has fallen into, why would an experienced Haskell programmer want to spend time with Eiffel ? I was bred with C like anyone else, loved Prolog and Lisp as educational toys, was about to invest time with Eiffel, and am wondering about Haskell...

Thanks in advance,


I replied:

From: Daniel F Moisset
To: Alexandre Ferrieux
Subject: Re: Haskell and Eiffel
Date: Wed, 02 Feb 2005 18:06:04 -0300

On Wed, 2005-02-02 at 11:48, Alexandre Ferrieux wrote:
> Hello,
> I know there are millions of "A vs. B" language comparisons, but your
> post on the SE list shows familiarity (to say the least) with both,
> while many comparison-writers have a strong bias...
> Would you care to give your view of Eiffel and Haskell ?

Disclaimer: I do not consider myself a serious Haskell programmer. I have studied it very deeply from a theoretical standpoint and from a programming language paradigms view. I haven't written large, real-life systems in Haskell, just some toy programs (some of them not-so-small). OTOH I have coded a lot of Eiffel that works (and I need it to).

After that, I can give you my opinion and you can put a value on it :)

> For example, considering the obvious concern for cleanliness in
> Haskell and the current state of uncertainty and ugliness Eiffel has
> fallen into, why would an experienced Haskell programmer want to spend
> time with Eiffel ?

The cleanliness of Haskell is appealing, from a mathematical point of view. However, I have always felt some uncertainty about scaling that to bigger systems.

A lot of algorithms can be described beautifully in Haskell, in a form that is short, compact, non-redundant, and clear at the same time. The same happens with a lot of data structures (but not all; the pure functional paradigm lends itself to some structures more than others). As I said in the mailing list, the typing system is a charm (as a programmer, and as a theoretical computer scientist). But I think it lacks several features for "programming in the large". It is hard to compare them one by one with Eiffel (given the different paradigm), but I feel Eiffel is much more of an "engineering" language instead of a "computer science" one.

Especially, it is hard to profile algorithms and estimate running speed or memory usage given the "lazy evaluation" nature of Haskell (this is not a problem in some strict functional languages like Ocaml). And interaction with other systems and libraries (I/O, databases, etc.) is a little convoluted to program, even after understanding "monads", which are very useful for emulating aspects of imperative programming, but require some brain-bending to grasp.

A lot of people disagree with me and think Haskell is ready for big systems, and there is a lot of real software written in Haskell (a popular one today is the DARCS revision control system).

I think the ugliness in Eiffel is something that will go away, and is more a matter of implementation. The language concept is by far the cleanest thing I've seen in imperative/OO programming, and will probably stay clean. Once ETL3 is released and ECMA sets the standard, I guess these crazy days of adding random features to Eiffel will go away. Or so I hope, at least.

Some things I like in Eiffel that you won't find in Haskell (and perhaps anywhere else) are:

  • Design by Contract. I think this tool is invaluable. I even use it when programming C (with helper libraries)
  • Easy interfacing with other libraries (even external ones)
  • A language structure allowing clean, modular, systems
  • An OO typing system that does not suck. In particular, I still think Eiffel is the only language with multiple inheritance done right.

The typing system of Haskell is very clear and sound, completely static, with type inference, and with genericity everywhere (even more than Eiffel, which, anyway, is the best I've seen among OO languages). But it's not OO (however, it has some kind of subtyping/inheritance that can help). The pattern matching mechanism is very useful, but there seems to be a conflict between it and modularity (you can't abstract patterns, and cannot use them without exposing your data structures), so part of the magic is lost.

OK, looking back this seems more of a pro-Eiffel mail, so your desire for an unbiased comparison may be lost :) Anyway, I think learning Haskell is an excellent educational exercise, and it teaches you very different ways of programming and of thinking about problems.

> I was bred with C like anyone else, loved Prolog
> and Lisp as educational toys, was about to invest time with Eiffel,
> and am wondering about Haskell...

Both Eiffel and Haskell have a lot of interesting stuff in their design, which can help you even if you don't program in either of those languages afterward. I have read about Lisp (but not programmed much more than "hello world"), but it looks like a very primitive functional language compared to Haskell (like comparing assembly to C or Pascal).

I mentioned Haskell on the SE mailing list mainly because I think each language has its strengths, and the best solutions can sometimes be found by mixing solutions from different languages. And the Eiffel design could take a lot from the typing system of Haskell without conflicting with its other features.

> Thanks in advance,

NP... btw, I sometimes blog this kind of stuff when I take the time to write it. Do you mind if I publish your email with my answer? (Omitting the name, if you want.)

See you, Daniel

Apparently he didn't find it so pro-Eiffel:

From: Alexandre Ferrieux
To: Daniel F Moisset
Subject: Re: Haskell and Eiffel
Date: Thu, 03 Feb 2005 14:13:40 +0100

Hello Daniel,

First, thanks for the precise and insightful answer. It is exactly what I needed ! And indeed, the principle of learning a language to export its spirit to another one is not alien to me: so far I've written very little in Eiffel (mainly by fear regarding the current uncertainty), but improved my C designs using those ideas...

So the immediate consequence of your message is that I'll rush to do the same with Haskell: good job for a "pro-Eiffel" view ;-)

(The idea of a 'thin imperative layer' isolating a 'functional core' from the hostile outside world is really exciting...)

> NP... btw, I sometimes blog this kind of stuff when I take the time to
> write it. Do you mind if i publish your email with my answer? (Omitting
> the name, if you want)

No problem to publish the message; and no need to hide my name.

Thanks again and best regards,


Now all that's left is convincing myself to use Haskell :)

GNU is Not Unix

Journal Journal: Success!

[They removed the "Linux" topic from the topic list. So GNU/Linux it is.]

Yesterday was GrULiC's 12th installfest, which was also the first FLISOL.

It was one of the best so far... on one hand, the extra time we put into publicity and organization helped; on the other, it's the first one where the installs went so smoothly, with so little fighting. Not a single case of exotic hardware to wrestle with.

Wireless Networking

Journal Journal: Getting wired

For the except office, we had a connectivity problem. Signing a new ADSL contract has the problem that nobody knows what will happen with Telecom's bandwidth caps; and the place has no fiber coverage (neither Fibertel nor Ciudad Internet).

What we settled on was a WiFi link to my house, sharing the Arnet DSL connection (old contract). For that we bought a network card for my machine (a Trendnet TEW228PI), a Micronet Access Point, and looots of antenna cable and connectors.

With this, the "wireless" part is half a lie, because we paid about $200 for almost 20 meters of cable and connectors, to cover a distance of ~120m. (Compare with the $125 for the card and the $305 for the AP.)

The AP worked right away, without complaining and without needing much configuration. The card took a bit of a fight, but only a little, and ended up working. The card has a Realtek 8180 chipset, which has an outdated driver from Realtek, is also supported by ndiswrapper, and has a free driver. The closed Linux driver only works with some kernels and hung everything, so we discarded it. Ndiswrapper required a kernel update (2.4.23 to 2.4.29), but then worked beautifully. I haven't played much with the free driver since the other one started working; apparently it supports more of the card's features (like reading the noise level and link quality), but the communication didn't seem as good (there was a lot of packet duplication).

On both ends we put up directional cantennas. One made from a Navarro Correas can, on the roof terrace at home, connected to the PCI card in dwarf. The other from a Terrazas de los Andes can, on the water tank of the office building (above the 2nd floor), with a cable running down to the AP in the kitchen (we're on the 1st floor). The AP stayed indoors to avoid the risk of theft.

At first we had several dramas and irregularities with the connection. We tried many configurations, and found that channel selection had a big influence on quality. We also had a couple of badly assembled connectors; once that was sorted out (and fixed to the wall) everything worked quite well. Even so, at some point we had to switch channels again, and we have the bitrate limited to 2Mbps (instead of the 11 that 802.11b gives); anyway, that's more than enough to go out through a 512Kbps pipe, and we do so without packet loss. From the card I can see 3 other APs, one of them from a company right behind the antenna (around Colón and Fragueiro).

We'll see whether this whole thing also turns out useful for the Córdoba freenet folks. We already announced the node on NodeDB. For now we're running without MAC filtering or encryption, all nicely promiscuous. We'll see whether we keep it that way or lock it down a bit (MAC filters, WEP), depending on how things go.

A photo, for the people (that one is from just before anthony left).

Linux Business

Journal Journal: Our place

Finally, Except got an office. One block from my house, in some apartments we rent from my parents (Pirovano 297, 1ro B).

We've been moving stuff in for a little over a week, and we already have a (precarious) place to work. Moving is quite a chore (and this is moving an office, not a place to live). We've been out shopping for furniture, hardware, plumbing, dishes, groceries...

But it's already nice to have a place, and to see a bit more physical reality to except :) I suppose it'll be a while before we get our little icon in the /. topics.


Journal Journal: apt-get upgrade

A while ago I mentioned that I was working on a thin client network for a company in another province. The thing is, every so often we do remote maintenance of the Debian Sarge they have installed there on the server. The people over there run an apt-get --download-only until they have all the packages (which takes a while, since they're on dialup). Then I connect over ssh and do the work.

Yesterday I was on that; they were asking for an upgrade to solve some problems (integration between Evolution and CUPS, and integration between OpenOffice and Nautilus). We started late (so that nobody would be working at the company except the sysadmin over there), and I ran the upgrade. After a while of hitting enter on the defaults for almost every question it asked, everything was installed.

The aftermath was a bit rough. I asked them to reboot the server, mostly so I wouldn't violently restart gdm on them and scare them, but also to check that the boot process was fine (I've seen upgrades where everything keeps working, except rebooting).

A looong while later, I see the people over there online (on Jabber); they had managed to get as far as gaim. From xfce, which is what had come up (when the default used to be gnome). I don't know what changes they've made in Debian, but I can no longer find where the default for that is set. "default.desktop" calls a script, which calls Xsession, which calls all the scripts in Xsession.d, and in none of them did I find a reference to xfce or gnome. In the end I told gdm to use gnome.desktop as the default desktop instead of default.desktop, and that patched things up. On top of that, gdm no longer offers to change your default session when you pick a different one at login (which was the first thing I suggested).

While all that was going on, I asked them to try printing from Evolution (which used to crash evo). It didn't crash, but the job never left the queue. I asked them to print from another program that used to work, and the same thing happened. Testing a bit more, we saw that the ones not working were the serial printers. After checking that cups had been upgraded, whether there were more updates pending, and that the configuration hadn't changed (I had asked them not to touch it), I found in the logs that there were problems starting the "serial" backend. Setting LogLevel to "debug", I saw it gave a "Permission Denied". I ended up finding that the executable that sends jobs to the serial ports had its execution bits turned off.

I already sent a scolding to the cupsys package maintainer. After that I left, hoping nothing else shows up broken post-upgrade today...
