
Comment Re:No surprise. (Score 1) 240

The Nokia N900's OS, Maemo 5, is based on Debian and uses apt as its package manager. You can add repositories right in the application manager UI (which is just a fancy interface for apt). OS updates can even be done with apt-get dist-upgrade. You can even install a chrooted real Debian environment and pull random packages from the Debian repositories. And yes, pulling stuff down with apt-get is beautiful.

Comment Re:Those aren't really "automatic" (Score 1) 631

I didn't write the original post. I chimed in to say that you don't actually have to specify the threads and details of communication with GHC's parallelization extensions.

There is no general purpose programming environment that I know of that completely automates parallel execution. Also, with ANY programming environment, you can make a very inefficient program if you do things unwisely. For that matter, some programming environments or styles are very different and it takes rewriting to adapt existing code properly.

It's all about how easily and effectively the tools let you do what you want. I think that GHC is among the leaders in concepts for parallel programming. The actual runtime performance is pretty good too, especially if you are willing to put some work into it.

With DPH, I am under the impression (I've used it very little so far, so maybe I'm wrong) that you use the parallel arrays pervasively within the program (any place a strict finite list would do) and the compiler does most of the work in determining when parallel execution is worth it. Also, Erlang's recommended style of many small processes scales very easily; the runtime decides when to run a process inline and when to schedule processes onto separate threads. These aren't quite automatic, but it's a LOT closer than the traditional thread & semaphore model.
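To make the many-small-tasks idea concrete, here is a minimal sketch in GHC Haskell of sparking the evaluation of every element of a list. This is a hand-rolled stand-in for parList/parMap from the parallel package (so it only needs base), and the "expensive" work function is a hypothetical placeholder:

```haskell
import GHC.Conc (par, pseq)

-- Create a spark for every element of the list; idle runtime worker
-- threads pick the sparks up as they are created. Forcing each Int to
-- weak head normal form is enough to do the work here.
sparkAll :: [Int] -> [Int]
sparkAll []       = []
sparkAll (x : xs) = x `par` (rest `pseq` (x : rest))
  where rest = sparkAll xs

main :: IO ()
main = print (sum (sparkAll (map expensive [1 .. 8])))
  where
    -- hypothetical stand-in for real per-element work
    expensive n = length (replicate (100000 * n) ())
```

Compiled with -threaded and run with +RTS -N, the sparks can be executed on multiple cores; without that, the program still runs correctly, just sequentially.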

Comment Re:Um, what? (Score 1) 631

I've programmed quite a bit in Haskell. GHC is by far the most popular compiler, and it implements all the latest parallelism and concurrency extensions (the last standard, Haskell 98, doesn't specify anything in this area).

There are two approaches to using multiple cores in GHC:

One is concurrency, which ranges from explicitly created threads (either OS threads, lightweight runtime-scheduled threads, or some combination) that communicate through channels, locked variables, or another traditional method, to STM (software transactional memory).
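As a minimal sketch of that style, here is a lightweight GHC thread handing a result back through an MVar (a locked variable, one of the traditional methods mentioned above); only base modules are used:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- A forked runtime thread computes a value and communicates it back
-- through an MVar; takeMVar blocks until the worker has filled it.
main :: IO ()
main = do
  result <- newEmptyMVar
  _ <- forkIO (putMVar result (sum [1 .. 1000000 :: Int]))
  total <- takeMVar result
  print total
```

Note how the whole thing lives in the IO monad: thread creation and communication are sequenced explicitly, which is exactly the imperative-looking flavor described below for the concurrency approaches.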

The other is parallelism. Whereas the concurrency methods all use monads to sequence and control operations, producing code that looks more like that of an imperative language, parallelism is done entirely in pure functional code. The short description of pure functional is that all data is immutable. This is very useful for parallel execution because it greatly simplifies evaluation dependencies: you don't have to worry about modifying things in the right order, or about write conflicts, because nothing is allowed to be mutated in the first place.

This also enables non-strict evaluation, which means that the various values in a program (even those nested in data structures) can be evaluated at any time during program execution, and in any order, as long as they are evaluated by the time they're needed. Parallel approaches include:
  • Simple use of par and seq. seq ties the evaluation of one term to another, forcing a term to be evaluated even if it isn't strictly needed yet. par creates a "spark" to evaluate a value; this spark may be executed by a different runtime thread than the one currently running. Together, they let you specify one value to be evaluated locally and another to be potentially evaluated by another CPU. This works well if the values are reasonably expensive to evaluate (otherwise the overhead of creating the spark, while small, will outweigh the benefit) and independent of each other. It can easily be used to, e.g., evaluate all the elements of a list in parallel; runtime threads pick up and execute the sparks as they are created.
  • Parallel strategies. Create an evaluation strategy that mirrors the layout of your program, identifying the parts that can benefit from executing in parallel.
  • Data Parallel Haskell (DPH) is an upcoming method that lets you define parallel array structures that the compiler can see through to determine vectorized evaluation strategies.
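The first bullet can be sketched in a few lines. This uses par and pseq as shipped in base (GHC.Conc), with the classic naive-Fibonacci as the expensive-to-evaluate values:

```haskell
import GHC.Conc (par, pseq)

-- Naive doubly-recursive function: each branch is independent and
-- reasonably expensive, so sparking one branch while evaluating the
-- other locally is worthwhile.
nfib :: Int -> Int
nfib n
  | n < 2     = 1
  | otherwise = x `par` (y `pseq` (x + y))  -- spark x, evaluate y here
  where
    x = nfib (n - 1)
    y = nfib (n - 2)

main :: IO ()
main = print (nfib 20)
```

Notice there is no mention of threads or locks anywhere; the program only annotates which values may be evaluated elsewhere, and the runtime decides whether to actually do so.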

In short, none of Haskell's methods of parallelization require you to be aware of threads or synchronization.

Comment Re:Eh wouldn't surprise me... (Score 1) 451

Windows NT 3.1 actually introduced Win32 first, in 1993, with limited user accounts, profiles, and the registry (which was the standard location for configuration). The documentation for Windows 95 marked a lot of Win32 APIs as not available, but it still specified that all application configuration should go into separate user and machine registry hives depending on the scope of the setting. Microsoft published guidelines and made them a requirement for getting the Windows 95 logo, but Microsoft has never had the power to force ISVs to do things a certain way. One choice that didn't help: to keep the OS small and simple, Win95 didn't implement any security, which let a lot of application design problems go unnoticed until much later.

There are a lot of programs written for earlier Windows versions that have problems on newer releases, but examples of applications that followed the guidelines for the OS version they were written for and still broke are extremely rare.

Comment Re:So (Score 4, Informative) 334

I once went looking to see if there was a way to do it from within the application code itself - something like mlock()/mlockall() in posix - and I couldn't find an equivalent, which may just be a reflection of my own inexperience with the Windows API but I figured I would throw that out there anyway.

The function you're looking for is VirtualLock. You may also look into increasing the process's minimum working set with SetProcessWorkingSetSize. This requires SeIncreaseBasePriorityPrivilege.

A process that is scanning through a file is supposed to use the FILE_FLAG_SEQUENTIAL_SCAN hint so that the cached pages are recycled first, but that doesn't always happen. It also doesn't help that csrss will ask the kernel to minimize a process's working set when its main window is minimized.

Comment Re:When do people get this (Score 1) 613

I don't know that it's documented in detail anywhere. The starting priority seems to be based on the process's priority (5 for normal, apparently) and is adjusted by usage heuristics and SuperFetch. There is an overview here. It may help to raise the priority of FF to AboveNormal if it seems like its pages are being discarded unnecessarily.

The other posters replying to your other post are correct: the OS doesn't really know what memory belongs to what tab; it has a page-level view of things. When the CPU accesses a page, it sets a flag in the page descriptor noting that the page was accessed. The memory manager checks these flags periodically to see which pages are being used. When the MM thinks the process has too many pages, it takes away those that haven't had that flag set in a while. I guess the frequency of use has some effect on the priority, but I'm not sure.

Comment Re:When do people get this (Score 2, Informative) 613

But "page out" means something in RAM is going to disk - if I ever want it back in RAM, I'll have to wait.

On Windows it doesn't necessarily mean that. Writing a page to disk != needing to read it back from disk later.

Each process has a working set. Pages in the working set are mapped actively into the process's VM with page tables. The memory manager aggressively trims these pages from the working set and puts them into standby memory. A page in standby is not mapped for reading (and more importantly for writing) anywhere in the system. Part of putting the page into standby involves writing a copy to disk. This will show up as a page written.

From standby, the page can be used one of two ways:

  1. Transitioned back. If one of the processes that originally had the page mapped touches the page, it will cause a soft page fault in which the page is simply put back in the process's page directory. There's no need to retrieve it from disk since it still has the same data from before. The disk copy is discarded. This will show up as a transition fault in the performance monitor.
  2. Reused for something else. Standby pages are counted as "Available" because they can be immediately re-used for another purpose without accessing the disk. The memory copy of the page is discarded and the page is re-used for something else. No disk activity is needed at this time since there is already a copy on disk. When one of the original owners of the page wants the data back and the page is no longer on standby, it has to be retrieved from disk. This will count as a page fault in the performance monitor.

The nice thing about this model is that disk activity isn't needed to either reuse pages or bring them back at the time of the demand. It helps avoid the ugly condition of paging one process out while paging another in at the same time, causing disk thrashing.

Since Vista, the memory manager will preemptively reload pages that have been bumped out of standby back into standby if there is free, unused memory available. Also since Vista, each page of memory has a priority from 0-7 that determines which pages are preferred for keeping in RAM. In all versions of NT-based Windows, memory mapping is very similar to page file management and uses many of the same counters (including standby memory, transition and hard faults, and pages in/out). Memory mapping is used internally by lots of components and for loading executable images and libraries. Also, file caching is logically based in many ways on memory mapping, although the counters differ in many cases.

Comment Re:mnb Re:Same thing happened to me this weekend (Score 1) 308

The OS is Maemo 5 "Fremantle", which is based on Debian (and BusyBox), but some of the ways it's set up aren't fully compatible with a lot of Debian standard software. I don't think you can just add the Debian ARM repositories directly and install stuff. Packages have to be tested and sometimes modified to work natively.

However, it is popular to create a real Debian environment with chroot, which works around that problem. See Easy Debian, a package that does all the work of setting that up, including GIMP, LXDE, and an environment you really can apt-get install most anything from the Debian mainline into.

I've had an N900 since December. I'm very happy with it. I installed Easy Debian and OpenOffice and they work quite well. There's only so much word processing I want to do on a device that size, but it's great for modifying office email attachments on occasion. Having a spreadsheet in my pocket is quite handy too. A stylus is recommended but not compulsory. It's still in testing and there are a few headaches, like some dialogs being too tall to properly reach the buttons at the bottom, but it's already improved a lot from previous versions and I expect it to get even better.

As for the community, the main forums don't look dead to me. Have a look at the packages they offer.

Comment Re:Be Bold (Score 1) 632

I don't know why this reply is labeled redundant, except to show a bias against stating legitimate concerns and problems with Wikipedia. It sounds like there's a broken mod system here on /. as well, not that that isn't also stating the obvious.

It could be that the post just before that one, by Nwallins (1059978) on 2009.11.25 13:41 (#30228784), makes the same point. I wouldn't have marked either as redundant, though, because one was first and the second has more material.

I also wouldn't say that /.'s moderation system is too far gone or biased on this topic. This story seems to have a higher proportion of 4- and 5-rated comments than average, if anything, and most of them are critical of Wikipedia's editorial realities. FWIW, I always browse interesting threads at -1 to look for good stuff to mod up and very rarely use negative mods.

Comment Re:Hope/Change? (Score 1) 670

A large government has far more opportunities for graft and corruption than a small one.

That is a good point, but as I see it, a smaller government would mean fewer congressmen to bribe to reach a majority vote, allowing a company to expand its influence on the same budget. If you meant "small government" figuratively, you are right: a large government tends to have far more public projects to try to push through Congress. Without these, though, lobbyists would focus their attack on public policy instead, lobbying against regulation and anything else detrimental to business yet necessary to ensure the rights and safety of the people.

Regardless of the literal and figurative size of the government, however, lobbyists had, have, and will continue to have more influence than entire political parties IMO.
