


Comment: Re:The "edge" of the universe? (Score 1) 64

Imagine a hot Universe at an early time (which may be very large, even infinite). Photons are suddenly released and go in all directions.
The Universe expands (meaning the distance between everything increases). The photons are still traveling through the Universe.
At any point in time you can observe photons arriving at your position, and their age equals the light-travel time from their origin (well, space expanded in the meantime, which makes it a bit harder to picture).
So, you can observe the background at any time, from all directions. It gives you access to the region where the photons were released, called the "last scattering surface". As time goes by, the photons arriving now have to be older, and their origin more distant.

Comment: Re:Pretty sure the heat death of the universe will (Score 3, Informative) 386

I predict that as heat-death approaches, time will slow down, and by the point of heat death, time will be at a complete standstill, much like approaching the event horizon of a black hole, so from its own frame of reference, the universe will actually seem to last forever.

Sounds like you don't understand time dilation. When you approach a black hole, time does not pass at a different rate for you. It does, however, pass at a different rate compared to an observer at a large distance (that's why time is *relative*). For them, objects falling towards a black hole seem to pile up near the event horizon (though gravitational redshift and time dilation make the radiation gradually unobservable). For the person falling in, nothing changes; they just fall through.

To summarize: time never slows down in an absolute sense, it can only slow down in one place compared to another place. You did not specify two places, so your statement does not make sense.
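The two-places point can be made concrete with the standard Schwarzschild formula: a clock hovering at radius r outside a black hole with event-horizon radius r_s runs at a rate sqrt(1 - r_s/r) relative to a distant clock. A minimal sketch (the function name is mine; this is the static, hovering observer, not the infalling one):

```python
import math

def dilation_factor(r_over_rs):
    """Rate of a clock hovering at radius r, relative to a clock far
    from a Schwarzschild black hole with horizon radius r_s.
    Takes the ratio r / r_s (must be > 1); approaches 0 near the
    horizon and 1 far from the hole."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

print(dilation_factor(1e9))   # far away: essentially 1, clocks agree
print(dilation_factor(1.01))  # just above the horizon: ~10% of the distant rate
```

Note the comparison is always between two clocks; the formula has no meaning for a single observer, which is exactly the point above.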

Comment: Re:See it before (Score 4, Interesting) 276

by buchner.johannes (#49665045) Attached to: Ask Slashdot: What's the Future of Desktop Applications?

Problem 1)
The problem open-source desktop applications have is that the feedback loop takes forever. It is difficult to edit a GUI or modify a behaviour immediately. One has to find the (current) code base, compile it, make sure one has the right libraries (which may differ from the system versions), and make a local installation.

I would like to see a program/framework/DE/whatever where you can, while you are in an interface, click "edit code" and modify the program on the fly. Sugar/OLPC began implementing such functionality for their Python programs. This would make scratching your own itches much easier, and redistributing your modifications too.

All progress comes from having fast feedback loops. Make it easy for users to play around (and exchange modifications).
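For Python programs, the Sugar-style "edit code" loop already exists in miniature via importlib.reload: edit the source on disk, reload it in the running process, no recompile or reinstall. A self-contained sketch (the module name and contents are invented for the demo):

```python
import importlib
import pathlib
import sys
import tempfile

# Create a tiny "application module" on disk and import it.
tmp = pathlib.Path(tempfile.mkdtemp())
mod_file = tmp / "live_app.py"
mod_file.write_text("def label():\n    return 'old'\n")

sys.path.insert(0, str(tmp))
import live_app
print(live_app.label())  # 'old'

# Simulate the user clicking "edit code" and changing the source,
# then reload it in place -- the running process picks up the change.
mod_file.write_text("def label():\n    return 'new, reloaded'\n")
importlib.reload(live_app)
print(live_app.label())  # 'new, reloaded'
```

The hard part a real framework would have to solve is migrating live state (open windows, widgets) across the reload, which a bare reload does not do.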

Problem 2)
Another change I would like to see in Desktop Applications is that one does not have to program any UI logic (creating widgets, connecting events) at all, it just seems to be redundant. Why do we design a UI by writing *text* in 2015?
It should be possible to auto-generate a UI from the types of the objects one wants to modify, combined with the constraints of UI-design best practices, perhaps plus a workflow definition. All this freedom is useless when we always want the same thing anyway (text boxes for text input, checkboxes for booleans, lists for lists, buttons for actions). Why hasn't a library come along that does that? At least Glade lets one draw UIs, producing an XML file that can then be loaded and hooked up to events. More work on making UI programming trivial, please.
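The core of the idea fits in a few lines, assuming Python dataclasses as the "objects one wants to modify"; the type-to-widget table below is made up, but it is exactly the text-box/checkbox convention described above:

```python
from dataclasses import dataclass, fields

# Made-up mapping from field types to widget kinds.
WIDGET_FOR_TYPE = {str: "text box", bool: "checkbox",
                   int: "spin box", list: "list view"}

@dataclass
class Person:          # example object one wants to edit in a UI
    name: str
    subscribed: bool
    age: int

def auto_form(cls):
    """Derive a form layout purely from the object's field types."""
    return {f.name: WIDGET_FOR_TYPE.get(f.type, "custom widget")
            for f in fields(cls)}

print(auto_form(Person))
# {'name': 'text box', 'subscribed': 'checkbox', 'age': 'spin box'}
```

A real library would additionally need validation rules, layout constraints, and event wiring, but the point stands: none of this information requires hand-writing UI code.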

Problem 3)
Deployment. It's ridiculous. Today we can easily install python/ruby libraries from git repos, but not programs that will run in user-space?
In fact, perhaps the whole packaging of Linux systems should be different. What if every user were running in a virtual environment where they can install any software they want, with the other users isolated from those changes? In the days of Docker and KVM that should be quite possible.
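For the Python corner of this, the standard library already does the per-user-environment part. A sketch (a temp directory stands in for the user's home, and the git URL is a made-up placeholder):

```python
import pathlib
import tempfile
import venv

# Give each user an isolated environment in their own directory,
# where they can install software without root and without
# affecting other users.
env_dir = pathlib.Path(tempfile.mkdtemp()) / "user-env"
venv.EnvBuilder(with_pip=True).create(env_dir)

# The environment has its own interpreter and pip; a user-space
# install from a git repo would then be:
#   user-env/bin/pip install git+https://example.org/some-app.git
print((env_dir / "bin" / "pip").exists())
```

The complaint above is that nothing comparable exists for ordinary desktop programs with C library dependencies, which is where the Docker/KVM-style isolation would come in.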

Comment: The question is (Score 1) 416

by buchner.johannes (#49615961) Attached to: No, NASA Did Not Accidentally Invent Warp Drive

If all goes through, what will it mean?
If I understood correctly, it allows you to pre-warp some space ahead along your route, so that you can begin your journey later. For example, to go to Alpha Centauri A, which light takes a few years to reach, you might start the warp drive, wait a year, then jump into the ship and travel there (taking one year less).

It will not save you any time going to new places you did not plot a course to.

I am also not sure about the speed limits that warp drive imposes. Possibly beyond light speed if it squeezes space enough? (By light speed I mean compared to flat space).

Comment: Re:But why is there only one spot like this? (Score 1) 45

by buchner.johannes (#49550567) Attached to: Mystery of the Coldest Spot In the CMB Solved

You make it sound like the temperature of the (empty) region averages down the background, making it colder. But something way more awesome actually happens: photons enter one side of the Void (the empty region) at an early time and travel through it. During that time, the Void expands, so on the way out a photon has to give up more energy than it gained on the way in, leaving it with a net redshift. It is the finite speed of light relative to these enormous, evolving structures that causes this effect!

Comment: Re:systemd is a bad joke (Score 4, Interesting) 494

by buchner.johannes (#49545361) Attached to: Ubuntu 15.04 Released, First Version To Feature systemd

You can't just leave things alone, because computers have also changed. Today we do not work on mainframes or desktop computers, but increasingly on laptops and mobile phones, which constantly change state, in terms of network connections, devices plugged in, location, hibernation.

I think there is consensus that these things did not work well on the old init system, although band-aids were found. I remember that changing the hostname stopped X from working, which can occur when DHCP gives you a new hostname. That is 80s design for you. Or changing the time messes up the logfiles.

Now you can choose which modern init system you want, and there are a couple out there: OpenRC, upstart and systemd are the most well-known ones.

OpenRC is the familiar runlevel based approach, which runs scripts which may or may not succeed.

Upstart is a triggering framework that takes pre-defined actions (but does not work towards goals). That means you have to write tasks describing how to get your system from A to B.

systemd is a dependency resolution program that knows what to activate next to get to a certain state (goal). It handles services, mount points and network connections in the same framework. It is essentially an overseer of a service tree.

There are some upsides to systemd, besides parallelizing the tasks of a dependency tree to reach a goal. One is that for every process, it is known which service launched it (Linux control groups allow tagging those processes). Also, each service can be assigned resource limits (memory, number of processes) which it cannot exceed (again, modern Linux supports that via cgroups). And, obviously, you are not limited to a fixed number of runlevels.
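The dependency-resolution idea can be sketched as a toy resolver: given units and their requirements, walk the dependencies depth-first to compute a start order for one goal. The unit names are illustrative and this ignores real systemd semantics (ordering vs. requirement, parallelism, cycles):

```python
# Each unit maps to the units it requires to be running first.
UNITS = {
    "multi-user.target": ["network.service", "sshd.service"],
    "sshd.service": ["network.service"],
    "network.service": [],
}

def start_order(goal, units, started=None, order=None):
    """Return a start order reaching `goal`: dependencies first."""
    started = set() if started is None else started
    order = [] if order is None else order
    for dep in units[goal]:
        if dep not in started:
            start_order(dep, units, started, order)
    started.add(goal)
    order.append(goal)
    return order

print(start_order("multi-user.target", UNITS))
# ['network.service', 'sshd.service', 'multi-user.target']
```

Note that independent subtrees (here, nothing blocks sshd.service once network.service is up) are exactly what systemd can start in parallel, which a linear runlevel script cannot express.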

Yes, systemd is annoying, because it is a new thing to learn. And it is annoying because the maintainers are inconsiderate. But in the end, it is just a program that starts other programs, with one particular way of doing it. I don't get what the big deal is. If the complaint is feature bloat -- well, Linux has a lot of features, and so does VLC, and there we consider them a good thing. Technically, the dependency-resolution approach of systemd seems like a good thing (as in, progress for Linux) to me.
