Tai's model is obviously doing well in its field: it has 38 citations, the most recent in 2010.
This is all true, I think. I compared the iPhone 4 side-by-side with the Galaxy-S and the latter has a much brighter, more defined screen. I don't know what about the iPhone 4 screen is supposed to be better, but it's definitely not the brightness, responsiveness or the colours. Maybe it's higher resolution or something, but frankly the resolution on these phones is now nearing the limits of what I can detect with my eyes in normal use. You basically can't see the pixels on them any more unless you press your face right up against the screen, so increased resolution is a bit irrelevant really.
That said, I still think the iPhone line has the better touch response in general use. I'm not talking about the UI layout or anything, purely the way things move about when you touch the screen. With the Galaxy-S you can sometimes see a little lag or jerkiness in the way things swish about the place; I don't ever seem to see this with the iPhone. It might have something to do with the iPhone only allowing the user to run one thing at a time, so there's never a really heavy processor load or something.
All this talk of using whatever kind of database to organise your experimental data is nuts. It's well intentioned, I'm sure, but it's still insane. I always tell students that there is no general way to organise one's data; you have to find a system that works for you. I reckon that > 99% of physical science researchers (not just physicists, as several replies seem to assume) wouldn't be able to set up and use a database in a way that's better, in terms of time and effort, than whatever they already do to organise their data. Worse still, I reckon it's the sort of thing one would spend a huge chunk of time building and then only use for a short while before getting bored of inputting stuff into the database properly, then starting to forget to do it, or worse, resolving to "do it in batches". The result is that one eventually stops using the database and goes back to the old method, but now with a huge hole in the data-trail where the database used to be. Alternatively, one struggles on with the database for a while, then tries to re-design it to add all the features missed in the original design, all the while sucking up loads of time, and eventually goes back to the original method anyway.
Getting on to some actual advice, I would suggest two features that your chosen system should have. It should be:
Personally, I have a lab book in which I record experimental details (what I actually did to generate the data). There's a date at the start of every day, the rough "titles" of the experiments, and then the details of each. When I generate data files I organise them into a hierarchy of directories. So, there's a "projects" directory that holds all the different projects. There might be a project to do with nanoparticles, say, which gets a directory called "nanoparticles". Inside each project folder there's a bunch of directories such as "data", "analyses", "reports", etc. The "data" directory is divided by the experimental technique used to get the data, such as "fluorescenceMicroscopy", "TEM" or "SEM" or whatever. The actual experiments then sit in directories inside the relevant technique. I name those directories by date first and then a brief indicator of what the experiment is about. So if I was looking at the aggregation behaviour of my nanoparticles in the presence of different polymers or something, I'd have a directory called something like
or something like that. Something that people often do is call a directory "20100815" or something like that. I used to do this, but I didn't find it useful to look back on after 6 months. People also forget that you can have something like 256 characters for a file/directory name - USE THEM! Inside this directory will be the data files I acquire. I tend to start the naming of files with a number, then a description of the sample and of what the point of the data is. So, for example, the first image in a set of TEM images might be named "01-100mM_PEG_generalGrid_300x", the next "02-...", "03-...", etc. This way all the files are listed in the order they were acquired, which I find helps when locating them later, since I think you remember the order of things better than their absolute position. So, if I wanted to find an image of aggregated nanoparticles that I took some time in the winter, I could easily find "Projects/nanoparticles/Data/TEM/20091106-FeNP_with_20k_PEG_in_EtOH/03-10mM_PEG_aggregatedParticles_20kx".
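The scheme above is simple enough to script. Here's a minimal Python sketch of it; the project, technique, and date names are just the examples from the text, and the `next_numbered_name` helper is my own illustration of the "number files in acquisition order" idea, not something the scheme requires:

```python
from pathlib import Path

# Build the example hierarchy: project / section / technique / dated experiment.
exp = (Path("Projects") / "nanoparticles" / "Data" / "TEM"
       / "20091106-FeNP_with_20k_PEG_in_EtOH")
exp.mkdir(parents=True, exist_ok=True)

def next_numbered_name(directory: Path, description: str) -> str:
    """Return the next 'NN-description' name so files sort in
    acquisition order (01-..., 02-..., ...). Assumes numbered
    files are never deleted, otherwise numbers could collide."""
    numbered = [p for p in directory.iterdir() if p.name[:2].isdigit()]
    return f"{len(numbered) + 1:02d}-{description}"

print(next_numbered_name(exp, "100mM_PEG_generalGrid_300x"))
```

In a freshly created experiment directory this prints "01-100mM_PEG_generalGrid_300x"; once that file exists, the next call would give "02-...".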
Anyway, this works for me, but bear in mind my data needs to be organised so that I can get to the relevant files and do something with them, not aggregated in bulk to pull some numbers out.