Comment Arm the general population (Score 1) 1633

This is all about government expanding its power and making sure that citizens' lives are controlled by the government rather than vice versa, driven by people who are obsessed with power and basically want to be dictators, with unlimited power over people's lives. They want a defenseless population, and that is the path to dictatorship.

I agree that the general population should be provided arms, as is done in Switzerland.

Comment Better documentation of source code is needed (Score 2) 582

I do believe open source is safer because it allows for independent third-party review, which is how this bug was found. Because outside parties had access to OpenSSL, they were able to find the problem, whereas with closed source software it might never have been found, or found but hushed up by the company. Proprietary software has just as many bugs as open source, if not more; the difference is that there is less accountability.

That being said, the potential of open source for independent review is not fully realized, because a lot of open source software is poorly documented with respect to the internal construction of the code. Programmers end up spending more time than they should learning the internals, and those running the project waste time repeating explanations of the code, whereas with some written documentation people could get many more answers without having to bother the project leads. Having no documentation on how the code fits together makes the learning curve much steeper for software with a large codebase.

On one hand, we say that open source allows people to review the code, but opening the source alone does not make review as easy as possible: the code needs internals documentation, or it will often simply take too much time for outsiders to penetrate. Many open source projects end up with a clique who understand the internals because they wrote the software, while those on the outside struggle to get in. Even for an expert programmer, access to documentation speeds up familiarization with the code immensely.

Skipping code documentation is poor practice. Open source developers should document what they are doing, both for others and to save their own time by not having to explain the same things over and over to newcomers.
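To make this concrete, here is a minimal sketch of the kind of internals comment being argued for, written in C; the module and every name in it (conn_pool, pool_acquire) are invented for the example:

    /*
     * conn_pool: fixed-size pool of reusable connections (hypothetical
     * module, for illustration only).
     *
     * Internals a newcomer would otherwise have to ask about:
     *   - slots[0..used-1] hold live connections; slots[used..cap-1]
     *     are free.
     *   - pool_acquire() never blocks; it returns NULL when the pool
     *     is empty so callers can decide whether to retry or fail fast.
     *   - The pool owns every handle in slots[]; callers must not
     *     free them.
     */
    struct conn_pool {
        int   used;   /* number of live connections           */
        int   cap;    /* total slots allocated                */
        void **slots; /* connection handles, owned by the pool */
    };

A dozen lines like this at the top of each module answer most of the questions that would otherwise land on the project leads.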

Comment Re:Yet again C bites us in the ass (Score 2) 303

It's probably possible to create a compiler mode that compiles bounds-checking code into existing C programs. This would involve one compiler pass that generates C output with the inserted checks, and a second pass to generate the binary. It could be done with a new backend in Clang. It would also allow the inserted code to be inspected easily, since the source output could be dumped to a file. A good thing about such a feature is that it could be turned on or off in the compiler, and it could be on by default for nearly all programs. Doing this would be much more efficient than rewriting a lot of software in other languages.
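As a hedged sketch of what the inserted code might look like (the CHECKED_STORE macro and __bounds_fail helper are invented names for illustration, not actual Clang output):

    #include <stdio.h>
    #include <stdlib.h>

    /* what the source-to-source pass might emit for a failed check */
    static void __bounds_fail(const char *file, int line)
    {
        fprintf(stderr, "%s:%d: array index out of bounds\n", file, line);
        abort();
    }

    /* replaces "arr[i] = v" when the array size n is known */
    #define CHECKED_STORE(arr, n, i, v)                \
        do {                                           \
            if ((size_t)(i) >= (size_t)(n))            \
                __bounds_fail(__FILE__, __LINE__);     \
            (arr)[(i)] = (v);                          \
        } while (0)

    int main(void)
    {
        int buf[16];
        int i = 20;
        /* original:     buf[i] = 42;  (out-of-bounds write) */
        CHECKED_STORE(buf, 16, i, 42); /* aborts: 20 >= 16 */
        return 0;
    }

Clang's existing sanitizers (e.g. -fsanitize=bounds) already do something similar at the IR level rather than as a source-to-source pass, and they can likewise be switched on or off at compile time.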

Comment Re:informal poll (Score 1) 641

A factor in Windows dominance is that most computers come with Windows and people just stay with the default. Engaging computer manufacturers to actually look at Linux as an alternative would probably help. A second factor is that Linux is often very antipathetic to binary drivers, even though such drivers would likely accelerate open source driver development by making reverse engineering easier. Binary drivers could provide support for hardware quickly, in a timely way, until open source drivers become available. I think Linux should provide a driver compatibility layer for binary drivers. This would not impact open source drivers, which could still be built for a particular kernel version. If someone buys a USB camera, they just want to plug it in and have it work, not worry about whether it will run on Linux. A third factor is the lack of applications, but that is largely the result of the lack of users caused by the previously mentioned deficiencies.

Another recent problem with Linux is the extremely poor user interface introduced by GNOME 3 and Unity, which are as bad as Windows 8. By abandoning the traditional taskbar, desktop, and start menu model, which really is best for most desktop users, they actually seem to harm Linux's opportunity to take market share from Windows. Fortunately, on Linux there are alternatives that keep the traditional model.

I think it would be nice if someone were to fund an open source project to get Wine to better than 99% compatibility with all Windows applications, and to start a project to build a driver compatibility layer that would allow Windows drivers to work on Linux. That would really move Linux toward being a real alternative to Windows, to the point where computer manufacturers could actually start installing Linux by default. Perhaps computer manufacturers ought to fund this work as well.

Comment Re:FTP? (Score 1) 161

When you are trying to download a file over FTP and you lose the connection 20% of the way through a 10 GB file, FTP doesn't look too good at all. It's good to have a "guaranteed delivery" solution that restarts any file transfers that were running, even if the computer is rebooted, right from where they left off. This is like what WebSphere MQ does. Even rsync sort of sucks here: if the process is interrupted, it has no idea where it stopped, so it starts the whole process of scanning directories for updated files from the beginning. Scanning directories and files looking for updates is a little archaic anyway; the filesystem itself should provide an update log showing exactly which files have changed since the last backup, eliminating the need for a filesystem-wide scan.
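A minimal sketch of that resume logic in C, using a local file copy as a stand-in for the network transfer (file names and the chunk size are arbitrary; a real tool would also fsync and verify integrity):

    #include <stdio.h>

    #define CHUNK 65536

    static long load_offset(const char *state)
    {
        long off = 0;
        FILE *f = fopen(state, "r");
        if (f) { fscanf(f, "%ld", &off); fclose(f); }
        return off;
    }

    static void save_offset(const char *state, long off)
    {
        FILE *f = fopen(state, "w");
        if (f) { fprintf(f, "%ld\n", off); fclose(f); }
    }

    int main(void)
    {
        FILE *in  = fopen("big.iso", "rb");
        FILE *out = fopen("big.iso.part", "r+b");
        if (!out) out = fopen("big.iso.part", "w+b");
        if (!in || !out) return 1;

        long off = load_offset("big.iso.state");
        fseek(in, off, SEEK_SET);   /* resume where we left off */
        fseek(out, off, SEEK_SET);

        char buf[CHUNK];
        size_t n;
        while ((n = fread(buf, 1, CHUNK, in)) > 0) {
            fwrite(buf, 1, n, out);
            off += (long)n;
            save_offset("big.iso.state", off); /* survives a reboot */
        }
        fclose(in); fclose(out);
        remove("big.iso.state");    /* transfer complete */
        return 0;
    }

Because the offset is checkpointed after every chunk, killing the process (or rebooting) and re-running it picks up within one chunk of where it stopped, rather than starting the whole scan or transfer over.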

Comment DVDs are better for many people (Score 2) 490

DVDs are not less convenient. The picture quality is much worse with streaming, and many people do not have the fast internet connection streaming needs. It's amazing how the article assumes that everyone has high-speed internet. Some ISPs also enforce download caps, which adds more trouble. Break-up of the picture is common, and if more than a few of your neighbors are trying to watch movies too, it just stops working. The following does not apply to rentals, but there can be a used market for selling DVDs and Blu-ray discs, while it is very difficult for a used market to exist for streaming or files. Streaming doesn't give you your own copy of the movie at all. At least with a DVD I can pop it in at any time, without paying a fee, and get good picture quality. Blu-ray can provide picture quality that far surpasses what streaming can do on the internet connections most people have. Blu-ray's HD picture is always clean and beautiful; streaming, for me, is almost always filled with glitches, artifacts, and poor picture quality.

Comment Display server does matter (Score 4, Interesting) 202

Obviously, the display server does matter to users. If users cannot use a whole set of applications because those applications are not compatible with Distro X's display server, that is a problem for users. This can be addressed by distros standardizing on display servers that use the same protocol. It is also possible, though more complex, for distros using different display protocols to support each other's protocols by running a copy of a rootless display server that speaks the other protocol. Relying on widget sets to support all display protocols is too unreliable, as we are bound to end up with widget sets that do not support some of them. Needless to say, a single standard is best; it would have been easiest and best if Canonical had gone with Wayland and actually worked with the Wayland developers to address whatever needs they had.

It's also true that a new display protocol wasn't really necessary. The issue with X was the lack of vertical synchronization. X already has DRI, Xrender, Xcomposite, MIT-SHM, and so on for other purposes. An X extension could have been created to give an application the timing of the display: the milliseconds between refreshes, the time of the next refresh, etc. X applications could then use this timing information to start their graphics operations just after the last refresh, and then use an X command to place a finished graphics pixmap for a window into a "current completed buffer" for that window, allowing double buffering. This could be either a command providing the memory address, or a shared memory location where the address would be placed.

All of the current completed buffers for all windows would then be composited in the server to generate the master video buffer that is drawn to the screen. There would be a critical section during which the assembly of the master video buffer occurs; any completed-buffer swap attempted by an application during that time would be deferred to the next refresh cycle.

A new XSetCompletedBuffer call could be created to provide a pointer to a pixmap, as sketched below. This is somewhat similar to XPutPixmap or setting the background of an X window, but given that XPutPixmap might do a memory copy it may not be appropriate, since the point is to hand the X server a pointer to the pixmap it should use in the next screen redraw. Said pixmaps would be usable as drawables for OpenGL operations, traditional X primitives, and so on, so the scheme would work with all of the existing X drawing methods. The pixmaps would of course be transferred using MIT-SHM. It is also possible to use GLX to do the rendering server side; for X clients used over the network, GLX is preferable, since otherwise the entire pixmap for the window would have to be sent over the network. The GLX implementation already allows GL graphics to be rendered into a shared memory pixmap. Currently, however, some drivers only support GL rendering into a pbuffer, which is not available in client memory at all, rather than into a pixmap; the DRI/GEM work is supposed to fix this, and the X server should be updated to support GLX drawing to a pixmap with all such DRI drivers.
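Here is a sketch of the render loop a client might run under such an extension. XGetRefreshTiming and XSetCompletedBuffer are the hypothetical calls described above, and sleep_until/draw_frame are placeholder helpers; none of these exist in Xlib today:

    #include <X11/Xlib.h>

    typedef struct {
        unsigned long next_refresh_us;  /* time of the next refresh       */
        unsigned long period_us;        /* microseconds between refreshes */
    } RefreshTiming;

    /* hypothetical extension calls, per the proposal above */
    extern void XGetRefreshTiming(Display *dpy, RefreshTiming *t);
    extern void XSetCompletedBuffer(Display *dpy, Window w, Pixmap done);

    /* placeholder helpers for the sketch */
    extern void sleep_until(unsigned long usec);
    extern void draw_frame(Display *dpy, Pixmap target);

    void render_loop(Display *dpy, Window win, Pixmap back[2])
    {
        int cur = 0;
        for (;;) {
            RefreshTiming t;
            XGetRefreshTiming(dpy, &t);

            sleep_until(t.next_refresh_us); /* start right after a refresh */
            draw_frame(dpy, back[cur]);     /* ordinary X/GLX drawing      */

            /* publish the finished buffer; the server composites it into
               the master buffer, deferring the swap to the next cycle if
               it is inside its critical section */
            XSetCompletedBuffer(dpy, win, back[cur]);
            cur ^= 1;                       /* double buffering */
        }
    }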

Another issue is how window position and visibility relate to vertical synchronization. Simplistically, the refresh cycle can be broken into an application render period and a master render period. If the X server has a whole pixmap buffer for each window, it grabs a snapshot of the window visibility/position state at the beginning of the master render period and uses that to generate the final master pixmap, copying the visible regions of windows into the master buffer.

It could be a good idea to give applications the option to render only the areas of their windows that are visible; this saves CPU and also avoids needless rasterization of off-screen vector data. To do this, applications would need access to visibility data at the beginning of the application render period. Instead of providing a single pixmap for the entire window, an application would then provide memory addresses for pixmaps of each visible rectangle it has rendered (these could live in the same re-used mmapped area), along with their coordinates. A snapshot of the window position state would need to be taken at the beginning of the application render period for use by the apps, and used in the master render period as well. This could introduce a longer delay than the former method between a window visibility/position change and its appearance on screen.
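Continuing the sketch, the per-rectangle variant might look like this; again, every name here is hypothetical:

    #include <X11/Xlib.h>

    typedef struct {
        int   x, y, width, height; /* window coordinates of the rectangle */
        void *pixels;              /* client-filled buffer (e.g. MIT-SHM) */
    } VisibleRect;

    /* hypothetical: visibility snapshot taken at the start of the
       application render period */
    extern int  XGetVisibleRects(Display *dpy, Window w,
                                 VisibleRect *out, int max);

    /* hypothetical: publish only the rendered visible rectangles,
       instead of one pixmap for the whole window */
    extern void XSetCompletedRects(Display *dpy, Window w,
                                   const VisibleRect *rects, int n);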

Comment Re:So why did Apple and Google toss it? (Score 2) 202

It's the not-invented-here syndrome, plus the fact that Google and Apple want to create fleets of applications that are totally incompatible with other platforms in order to lock users into their respective platforms. Obviously these are business and political reasons with nothing to do with technical issues. X would have been a fine display platform for either, but then the platforms would be compatible with mainstream Linux distros and you would have portable applications, so your users wouldn't be locked into your OS.

Comment Re:Shh... (Score 4, Informative) 202

This is all wrong. X has something called GLX, which allows you to do hardware-accelerated OpenGL graphics; GLX allows OpenGL commands to be sent over the X protocol connection. The X protocol is sent over Unix domain sockets when both client and server are on the same system, which is very fast; there is no network-transparency latency when X is used locally in this manner. MIT-SHM additionally provides shared memory for transmission of image data. Only when applications are being used over a network do they need to fall back to sending data over TCP/IP. Given this, the benefits of network transparency are many, and there is no downside, because when an application runs locally it can use Unix domain sockets, MIT-SHM, and DRI.
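For reference, here is a minimal sketch of the MIT-SHM path (error handling omitted; compile with -lX11 -lXext): the client and the local X server map the same shared segment, so pixel data never crosses the socket.

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>

    XImage *create_shm_image(Display *dpy, XShmSegmentInfo *info,
                             unsigned int width, unsigned int height)
    {
        int scr = DefaultScreen(dpy);
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap,
                                      NULL, info, width, height);

        /* allocate the shared segment and map it into this process */
        info->shmid = shmget(IPC_PRIVATE,
                             img->bytes_per_line * img->height,
                             IPC_CREAT | 0600);
        info->shmaddr = img->data = shmat(info->shmid, NULL, 0);
        info->readOnly = False;

        /* the X server attaches the same segment: zero-copy locally */
        XShmAttach(dpy, info);
        return img;
    }

    /* later, this pushes pixels without copying them over the socket:
       XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, width, height, False); */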

X has also had DRI for years, which allows an X application direct access to the video hardware.

As for the traditional X graphics primitives, they have no negative impact on the performance of applications that do not use them and use a GLX or DRI channel instead. It's not as if hardware-accelerated DRI commands have to pass through XDrawCircle, so the existence of XDrawCircle does not affect a DRI operation in any significant way. The amount of memory this code consumes is insignificant, especially compared to the amount used by Firefox. Maybe back in 1984 a few kilobytes was a lot of RAM, which is when many of these misconceptions started, but those issues were common to any GUI that would run on 1980s hardware. People are just mindlessly repeating a myth from the 1980s that has little relevance today. Today, X uses far less memory than Windows 8, and the traditional graphics commands consume an insignificant amount that is not worth worrying about, and that is needed to support the multitude of X applications still using them.

Comment Recency bias and global warming pause (Score 4, Interesting) 703

Much of the recent global warming skepticism has been fueled by the decade-long pause in the rise of the global average temperature. From what I can gather, while many areas are hotter than they were previously, other places are somewhat cooler, so it balances out.

Some of the skepticism exhibits a recency bias, simply ignoring everything prior to about the year 2000. On a chart of temperatures over the past 100 years, the current pause looks rather insignificant and could simply be a temporary pause rather than a change in trajectory. Skeptics have trouble explaining away the previous 50 years of temperature increase.

 
