## Journal: viXra

I submitted again at viXra. http://vixra.org/abs/1303.0220


http://vixra.org/abs/1301.0115 Neglect of Metaphysics

--

Michael J. Burns

This film and slide scanner beats the Wolverine Tech device I bought, which snaps a digital picture of my negatives and slides. The CanoScan FS2710 has no vignetting and no ill-considered custom color curve that must be undone in post-processing.

It can even capture warped slides. No holder is required for slides; just insert them individually for scanning. The scanner is supported by my Linux computer with no added software. I added the required SCSI PCI internal card and cable to my desktop computer, again with no added software.

But still, it is a challenge to do the post processing needed to get good color from old slides with any scanner. I suspect that no scanner has the automation to do this well.

I scan from within Gimp with the XSane plugin. I turn off and reset all of the auto levels, use the color and full-color-range settings, and then build for each slide a custom four-point curve for each color: red, green, and blue. The green curve stays closest to the diagonal; I pin the middle third of the green curve exactly to it. As a next-to-last step, the middle thirds of the red and blue curves are adjusted down from the diagonal to eliminate a magenta tint.

But first, for each color, I pin the middle third of the curve to the diagonal, and then move the lower-left point along the bottom into the base of the histogram, green adjusted the least and blue the most, as required by the faded slide. This makes for realistic dark colors.

Next I move the top-right points along the top closer to the right edge of the histogram to make accurate bright colors, again green adjusted the least and blue the most.

Now I move down the middle thirds of the red and blue curves to fix the faded magenta look. The curves still run parallel to the diagonal, with blue adjusted down more than red. You can add extra tilt to this part of the blue curve if the darker medium colors are still bluish and the lighter ones warm-tinted.

Last, I might make four points for the value curve to adjust the tilt of its middle section: reduce the slope there to bring harsh contrast down to a realistic level, and then lower that middle section to compensate for the overall fading.
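The per-channel curve adjustments described above can be sketched numerically. This is a minimal pure-Python illustration of a piecewise-linear four-point curve, not the Gimp/XSane implementation; the control points below are hypothetical examples in the spirit of the text (green pinned to the diagonal, red and blue with their black points moved into the base of the histogram and their middle thirds pulled below the diagonal).

```python
def apply_curve(value, points):
    """Piecewise-linear tone curve; points are (input, output) pairs over 0-255."""
    if value <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if value <= x1:
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)
    return points[-1][1]

# Hypothetical four-point curves for a faded slide: green stays on the
# diagonal, red and blue are shifted at the black point and pulled down
# in the midtones to remove a magenta cast.
curves = {
    "red":   [(10, 0), (85, 72), (170, 157), (255, 255)],
    "green": [(5, 0), (85, 85), (170, 170), (255, 255)],
    "blue":  [(20, 0), (85, 62), (170, 147), (255, 255)],
}

pixel = (120, 120, 120)  # a magenta-tinted midtone in the raw scan
corrected = tuple(
    round(apply_curve(v, curves[c]))
    for v, c in zip(pixel, ("red", "green", "blue"))
)
# red and blue come out reduced relative to green, as intended
```

Only the shape of the curves matters; in practice each slide gets its own control points, as the text describes.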

--

Michael J. Burns

I posted two articles at the viXra archive for physics outsiders. Have a look if you will. I try to post first drafts of fresh thinking here, though. And look for my book "Hacking Physics".

http://vixra.org/abs/1211.0139 is my article "Neglect of General Covariance".

http://vixra.org/abs/1212.0094 is "Draw the metric!".

--

Michael J. Burns

Think about it in a careful way. Visualize a spacelike line, then extend it as a geodesic. Using a proper diagram of the universe, it is easy to show that this eventually intersects the big bang horizon. Don't be confused by convention; the Friedmann coordinates are very deceptive.

So shards of the big bang, expanding at light speed from a universal starting point in the past, are essentially contemporaneous with us.

Of course the point origin, which ought to be apparent to anyone with the use of proper inertial coordinates, implies the overall masslessness of the universe. This makes the age of the universe compatible with the value of the Hubble constant, a simple coasting outwards. And the Einstein equation (in the correct tensor rank) does allow for a negative mass background to counter the mass and energy of the material universe.

But what, you protest, if the universe does have net mass and has decelerated? Well, then there is the boggling thought that, out at the contemporaneous shards of the big bang, matter is continuing to enter the universe.

--

Michael J. Burns

I read his essays on this point but did not understand until this year. He knew that natural selection on groups was a truism. And so he did not even argue at length for it, only focusing on the selection that acts on various sizes of groups, as opposed to accidents of history.

Interactions between organisms that are not zero sum drive the practical importance of natural selection on groups. Species are maintained with a conserved chromosome map and sexual reproduction due in part to the selective incentives for sharing immunological genes.

--

Michael J. Burns

My Samsung ST65 camera actually has a contrast control, and I have now realized that the proper setting for it is not zero. The default setting enhances contrast in the shadows and compresses the highlights. A -2 setting for contrast balanced by a +2 setting for saturation improves the appearance of the gray scale on a picture of the Color Checker 24 card. This might not be the final word - I will look at -1, +1 as well; but I have no desire to do post processing on photographs from this camera.

I have been trying to calibrate a Wolverine Data F2D14 film to digital converter. I now find that the color curves for this product are customized in a woefully unfaithful style. There is a big deficit of green in the lower midtones, and a huge deficit of red there. And there is a general surplus of blue from near black to near white. It is barely within the capabilities of the RawTherapee software to compensate for this using custom color curves.

My Canon EOS Digital Rebel, the 2004 model, works very well with the UFRaw software for post processing. The free software color matrix borrowed from the work of David Coffin makes pleasing color even compared to the ICC calibrations from Canon. But there is no natural calibration for the exposure setting and white balance. For a Color Checker 24 photographed against a dark background, the exposure setting in UFRaw should be -1.36, the temperature setting should be 5660, and the green balance

--

Michael J. Burns

For film cameras using the same film, the bit rate proportionality simplifies to the product of the aperture area and the frame area. Slower film has the higher bit rate. For digital cameras that use the same technology, the proportionality is to the product of the aperture area and the number of pixels. But smaller sensor areas can force you to take multiple exposures in order to complete the picture with the required signal to noise ratio.
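The proportionalities above can be written as simple figures of merit. This is a hedged sketch: the function names and units are my own, and only ratios between comparable cameras are meaningful.

```python
import math

def film_bit_rate(aperture_diameter_mm, frame_area_mm2):
    """Relative bit rate for film cameras on the same film:
    proportional to aperture area times frame area (arbitrary units)."""
    aperture_area = math.pi * (aperture_diameter_mm / 2) ** 2
    return aperture_area * frame_area_mm2

def digital_bit_rate(aperture_diameter_mm, pixel_count):
    """Relative bit rate for digital cameras of similar technology:
    proportional to aperture area times pixel count (arbitrary units)."""
    aperture_area = math.pi * (aperture_diameter_mm / 2) ** 2
    return aperture_area * pixel_count

# Example ratio: at the same aperture, a full 35mm frame (36 x 24 mm)
# has twice the bit rate of a half-frame (18 x 24 mm).
ratio = film_bit_rate(34.7, 36 * 24) / film_bit_rate(34.7, 18 * 24)
```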

--

Michael J. Burns

Study of natural selection on groups is the most thorough rejoinder to libertarian dogma. Natural selection on groups is actually a mathematical truism. And it has a proportionate practical effect whenever transactions between individuals are not zero sum. The prominent effect is to promote specialization of individuals; witness all of the specialties in the groups called ecosystems. The deprecation of groups by promotion of the concept of individuals is mathematically only a false dichotomy. Mathematical simulations have suppressed evidence of groups by excluding non-zero-sum transactions, individual specialization, and extreme circumstances.

Denying natural selection on groups is logically equivalent to asserting that, on average, contracts and alliances are not worthwhile, and should often be opportunistically abandoned. This is a latent contradiction in the libertarian view.

Conservative and even liberal systems of morality are often eager to control or ignore individual diversity, under the sway of patriarchal, military, or priestly and academic hierarchies. But there are natural dynamics neglected by this attempt at control.

--

Michael J. Burns

I am very interested in the question of what kind of digital camera can match the quality of a 35mm film camera. In the context of the new high definition digital post processing, how many pictures with a standard information content can I take with a camera in one second of open shutter?

In the end, I compute that my (consumer grade) Samsung ST65 has 2.6% of the performance of my Yashica. So what I ought to do in practice is to load the Yashica with slow film, attach the telephoto lens extender for most pictures, use a tripod and cable release, focus carefully, and use the best nondigital processing for prints.

The answer can only be had, I think, by comparing the shutter speeds of different digital sensors and frames of film on the same picture, using the same lens aperture area, at the same pixel count and signal to noise ratio. (The faster shutter speeds are better even for pictures with no motion, because then more total information can be fed to digital post processing from the same use of the camera. You can take the picture twice and combine the information in post processing when the shutter speed is doubled. Ignore this shutter speed rating if you will, but then there are no quality differences left between cameras in the modern context.)

Cameras not matching this test design, but having similar technology to what is tested, should then be expected to perform more or less well in proportion to the area of their apertures, and in proportion to their pixel counts. The pixel count applies again, as a reduction in exposure time due to the lower magnification that is needed for the picture.

The signal-to-noise ratio counts twice as a factor for rating cameras; it takes post-processing using four photographs to double the ratio. These ratios can be extrapolated as proportional to the linear size of a pixel for the different sensors. And films have a constant signal-to-noise ratio, but it is said to be twice as good for print films compared to slide films.
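The claim that it takes four photographs to double the signal-to-noise ratio follows from independent noise adding in quadrature: the noise of an N-frame mean falls as 1/sqrt(N). A small simulation, with illustrative signal and noise levels of my own choosing, bears this out:

```python
import random
import statistics

random.seed(0)
signal, sigma, trials = 100.0, 10.0, 20000

# One exposure vs the mean of four independent exposures of the same scene.
single = [signal + random.gauss(0, sigma) for _ in range(trials)]
stacked = [signal + sum(random.gauss(0, sigma) for _ in range(4)) / 4
           for _ in range(trials)]

snr_single = signal / statistics.stdev(x - signal for x in single)
snr_stacked = signal / statistics.stdev(x - signal for x in stacked)
gain = snr_stacked / snr_single  # close to 2.0: four frames double the SNR
```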

Larger total areas for the sensor format count inversely, because the exposure is slowed.

The effective sharpness of the lens aperture seems to not vary intrinsically with the size of the film frame or sensor.

Other references do not focus on the shutter speed comparison. They neglect the aperture area and magnification, and the signal to noise ratio is only accounted once as a factor. But all of this yields a systematic bias.

Refer to this authority:

http://www.clarkvision.com/articles/digital.signal.to.noise/

This page compares ISO 50 Velvia slide film to the digital sensor in the Canon 1D II camera. For the same signal to noise ratio as the film achieves, it rates the Canon sensor at ISO 1220. In shadows, the Velvia only rates at ISO 15. The advantage for the Canon is 24.4.

For the next factor, look at:

http://en.wikipedia.org/wiki/Velvia

This page implies an equivalent pixel size of 4.7 microns for the Velvia film, 160 lines per millimeter at 50% of full contrast. The Canon sensor has pixels that measure 8.2 microns. The Canon yields 8.2 million pixels with its APS-H format, and the Velvia film gives the equivalent of 39 million in the 35mm size. With this factor applied twice, the rating here for the Canon is 0.0144.

http://en.wikipedia.org/wiki/Image_sensor_format

The APS-H format is 1.29 times smaller than the 35mm size linearly, and 1.66 times smaller in area. For the same aperture area and the same picture, the shutter speed is faster by this factor.

Since the signal to noise ratio is the same for the two cameras, we multiply the three factors to get the advantage for the Canon digital camera in the shutter speed for equivalent information with the same aperture area. This is a factor of 1.02! Compared to print film, the factor is 0.25, because the film's signal to noise ratio is doubled. Digital cameras compete better in the shadows of a picture.

http://www.cacreeks.com/films.htm

Holding the aperture areas constant, frames of film perform better with increasing area. Beautifully detailed photographs do not require longer shutter times. The shutter speeds do not increase sufficiently with ISO rating to beat the decreases in pixel number counted twice. ISO 800 film has half the bit rate of ISO 100.

When pixel sizes are unchanged, the ratings also improve with area for digital sensors. And increasing pixel numbers improves the ratings even for the same format size. Post processing can implement these tradeoffs differently, especially when multiple exposures are made.

So, comparing my Yashica with the slow slide film and a 59mm f1.7 lens to my Samsung ST65 with 14.2 million pixels, a crop factor of 5.6 and a 24.5mm f5.9 lens, my Samsung can only create 0.0258 of the same work of art as the Yashica in the same shutter speed. I calculate this by comparing the Samsung to the Canon camera.

This calculation is copied from the "units" command-line calculator:

```
You have: (24.5mm/5.9)^2/(59mm/1.7)^2 *   # the light available
          (5.6/1.29)^2 *                  # the smaller image is brighter
          14200000^2/8200000^2 *          # recalibrated number of pixels,
                                          # counted twice for information
                                          # and faster exposure
          (1.45micron/8.2micron)^2 *      # recalibrated adjustment for noise,
                                          # done twice
          1.02                            # calibration point for slide film
                                          # to digital camera
You want:
        Definition: 0.025803348
```
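The same calculation can be checked in plain Python; the factors and comments are taken directly from the units input above.

```python
light = (24.5 / 5.9) ** 2 / (59 / 1.7) ** 2  # the light available
brighter = (5.6 / 1.29) ** 2                 # the smaller image is brighter
pixels = 14200000 ** 2 / 8200000 ** 2        # pixels, counted twice for
                                             # information and faster exposure
noise = (1.45 / 8.2) ** 2                    # adjustment for noise, done twice
slide_cal = 1.02                             # calibration: slide film to digital

ratio = light * brighter * pixels * noise * slide_cal
# ratio agrees with the units Definition, about 0.0258
```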

--

Michael J. Burns

Cosmologists need to check their work by using different coordinates. The Friedmann coordinates conventionally used are rife with fictitious potentials which interact nonlinearly with real sources of gravity. It is Cartesian coordinates alone that have no such effects. And hyperspherical coordinates are still orthogonal everywhere, so they have no off-diagonal fictitious potentials.

The whole point of coordinate systems other than Cartesian is to pretend that they are Cartesian themselves. To maintain this fiction without error, fictitious forces and their potentials must be inserted. The fictitious potentials then seem real and not different from ordinary gravity. They affect sources of gravity in the same way as real potentials.

When the correct fictitious potentials are inserted into the boundary conditions, then the Bianchi identities can be integrated on. This can be a graphical process when the correct tensor ranks are used.

--

Michael J. Burns

The quality of CD player reproduction is on my mind today, as is the contribution of fictitious potentials to cosmology. Both questions are guarded (at sites like Wikipedia and arXiv) by the orthodox priesthoods that censor discussion for the sake, in the end, of their personal equanimity.

Practical digital-to-analog converters do not interpolate using the entire digital stream, so the relevant theorem fails to protect these converters from outputting substantial distortion. Aliased signals, albeit of reduced strength, occur around all of the frequencies that are a factor of the sampling frequency. In another context it would be called a Moiré effect. I am sure that an artificial sound track could be made that sounds terrible when recorded and played back from an audio CD: shoutiness and quavering.
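The Moiré analogy rests on the basic aliasing identity: sampled at rate fs, a tone just below fs produces the same sample values (up to sign) as a low tone. This sketch is not a model of any particular converter, just the arithmetic behind the aliasing concern; the CD-like rates are chosen for illustration.

```python
import math

fs = 44100.0       # CD sampling rate, Hz
f0 = 1000.0        # an audible tone
f_high = fs - f0   # a tone just below the sampling frequency

# The samples of the high tone equal the negated samples of the low tone:
# sin(2*pi*(fs - f0)*n/fs) == -sin(2*pi*f0*n/fs) for integer n.
low = [math.sin(2 * math.pi * f0 * n / fs) for n in range(64)]
high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(64)]

max_mismatch = max(abs(a + b) for a, b in zip(low, high))
# max_mismatch sits at floating-point rounding level: the two tones alias.
```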

The fictitious effects on sources of gravity, nonlinear and produced by coordinate systems that are not a priori Cartesian, are incorrectly ignored by academics in general relativity and cosmology. There may also be an effect on sources by the incorrect use of the second-rank tensor as opposed to the fourth rank. The cosmological constant is also an artifact of confusion that is occasioned by use of the second rank.

Including the effect of fictitious and real forces on sources, using covariant coordinates, adopting the exterior derivative, and representing the two Bianchi identities in the correct fourth and fifth ranks are the practices needed to clarify cosmology. Then solutions can be understood graphically, due to full compliance with the principle of general covariance.

--

Michael J. Burns

http://blogs.scientificamerican.com/degrees-of-freedom/2011/11/06/the-cosmic-magnifying-lens/

I really think that this calculation by Dr. Castelvecchi of the magnification of the cosmic background radiation is spoiled by an artifact of Friedmann coordinates, that he takes as real instead of discounting it.

The actual magnification by spatial curvature is comparatively small. What does happen is that the big bang physically expands shards of itself, which then preserve a sufficiently high temperature due to time dilation from their speed of recession. These expanded shards are then perceived relatively undistorted in size by any gravitational optical effect.

Light, as a matter of definition in this circumstance, cannot reverse course as drawn in his diagram; it only seems to behave so because Friedmann coordinates are not inertial. The data for supernovae Ia brightness are remarkably close to what is expected in a flat and empty universe (a deviation of only about 5% in distance), so calculations using special relativity alone are useful as a first approximation and check point.

In addition, a proper calculation of kinetics (assuming the kinetic origin of the dimming, and not evolution of white dwarfs or an epoch of dust) shows deceleration of us as observers, not acceleration. This is so even when the magnification by convergent spatial curvature is accounted for, because acceleration effects dominate curvature effects in a homogeneous universe.

He then takes this coordinate artifact and attributes the cause to an increase of dark energy, which to me is a telling criticism of the concept of dark energy. A version of general relativity rigorously based on the Bianchi identities (rather than on a momentum tensor which does not include the fictitious effects of a noninertial coordinate system) forbids any version of dark energy that is not conserved.

But I am very interested, of course, in the real amount of spatial curvature. Flatness in Friedmann coordinates implies convergent spatial curvature and deceleration of the universe when inertial coordinates are used. In a universe that is inertially flat and does not decelerate, the graininess of the cosmic background radiation would by now have grown to a scale of 206 million light years, as I calculate. Deceleration would cause a decrease in this scale.

--

Michael J. Burns

I have been studying how to do cosmology with inertial coordinates, not the Friedmann coordinates that cause such dogmatic chaos.

For instance, kinetics in an inertial frame show extra supernovae Ia dimming with distance as evidence for slowing of expansion, if anything. The departures from the expected dimming are 10%, and could have other causes. The Friedmann coordinates are noninertial to the point of changing the sign of the usual unwary calculation! Things not having a magic velocity are said to have cosmological acceleration by the measure of Friedmann coordinates; the light signals from supernovae are included in this fictitious effect.

It is proclaimed that the typical distance scale of variation, from peak to valley, of the temperature in the cosmic microwave background proves the flatness of space. But flatness within the Friedmann coordinates does not imply flatness in inertial coordinates. A flat and empty universe by inertial standards would possess negatively curved space by the measure of Friedmann coordinates.

This typical distance for variation in the background extrapolates to a present size of 206 million light years, when an inertial or geodesic version of space is used. So, if this is also the typical spacing between superclusters and the middle of the adjacent voids, then space is indeed flat. But flatness in Friedmann coordinates implies positive spatial curvature in inertial coordinates. The positive curvature would magnify the variations in the cosmic background. And acceleration or deceleration trumps curvature with a stronger effect. So, if the spacing of superclusters is smaller today, then both counts imply deceleration of the universe.

--

Michael J. Burns

The predictions by quantum mechanics of net vacuum energy and of slowing of ultrashort-wave radiation by a quantum foam do not work out. This is because general relativity has theorems to the contrary. But why would those theorems win out? General relativity wins within the domain of its axioms because a lock-in theorem applies, namely the Bianchi identities, which follow simply from the existence of the metric. This prevents the subject matter, spacetime, from experiencing alternatives to governance by the metric. Only where the metric cannot be defended, on the small scale, can quantum mechanics overrule relativity with its theorems.

This dominance by relativity on large scales can even be nonlocal in the case of quantum foam. The foam can make what mischief it will at any small scale where the metric is not viable, but the theorems of general relativity intervene at some other location to require the opposite.

It is remarkable that a large scale effect of quantum mechanics does exist; the states of matter impelled by Bose and Fermi statistics are not overruled by relativity.

--

Michael J. Burns

Are we running light with overbyte?