Whoever downmodded that, you just lost your geek card.
It is possible to create a universe from nothing. What you do is borrow energy from a quantum fluctuation.
No. You can't have a quantum fluctuation when there is nothing to have a quantum fluctuation in. Your assertion requires the pre-existence of a universe in the first place.
Ergo, the universe does not exist.
Assumes facts not in evidence, to wit, that creation is required in the first place. Consider: everything we have and know about was not "created"; it was always present in some form or other. Assuming that this is not the case for a time/dimensional configuration for which we have neither evidence nor understanding is, at best, fact-free speculation, and certainly in no way an inevitable logical conclusion.
If you can't "wrap your mind around" how your average bunny rabbit could rule a world of vicious, hungry, intelligent tigers, does that make you "appreciate the idea"? Are you willing to extend "blind faith" in this direction as well?
I think the premise that you can "appreciate it" because "you don't get it" is just politically correct appeasement.
Why not just go with "I don't get it," and therefore "it's not worthy of confidence, only of speculation that draws on the knowledge we do have, until or unless I do"?
As to infinity, if you don't understand it, what's the problem? Pizza still tastes like pizza, and science proceeds apace regardless. Not understanding something in no way makes the mythologies of pre-scientific societies in any way likely to provide answers.
It came from God... God created it.
There is absolutely zero evidence for this, so I see no reason at all to take it seriously.
So, it was always there then. You believe in infinity.
No. I don't "believe" in anything. I was simply correcting the simplistic, errant logic of the post parent to mine.
My confidence rests with the idea that our physics is currently unable to describe what went on prior to a certain point in time, if "time" is even the relevant dimensional term, and even assuming we've got the facts straight back that far from the scant evidence that remains. I'm perfectly comfortable with that. I am curious to know the answer(s), if there are any I can understand, but it bothers me not at all that I don't presently know, and may never know.
Although I'm comfortable, as I said, I find informed speculation interesting. What I have extremely low confidence in, though, are attempts at answers made up by pre-scientific societies. I find the idea that they had any means to know straight-up ludicrous. Having been raised in a country that positively reeks of Christianity (the USA), I have made it my business to learn as much about it in particular as I could. That process served only to significantly lower my confidence in its basic premise.
But where did the something it came from come from? And where did that come from, and so on?
Why did it have to come from anywhere? Our existence implies that something was there at every point in our current timeline, and at any point dimensionally prior to it, if indeed "prior" is a relevant term.
Perhaps the universe is infinite in other dimensions (like time) as well as in space. If it is, so what? Does Cap'n Crunch taste any different? No.
The important thing, to me, is to note that we do not know, and therefore it is pointless to claim that we do. Speculation, of course, is very interesting, but only serves to winnow out the things physics tells us are nonsense. Keeping in mind that physics is evolving as well.
No. We can trace the assembly of a loaf of bread just fine from its now-current components. We can't trace the creation of the universe. Our physics makes nonsense of the evidence we have uncovered; therefore, we do not understand that evidence. Until we do, we can't trace the universe any further.
I have no problem with yet to be solved questions, and find no need to make up stories in order to pretend to solve them. I'll wait comfortably until we figure it out, assuming we do, which is also not a given. It may be beyond our capacities, and certainly as far as this universe goes, most of the evidence our current skills allow us to work with has long since dispersed.
However, from a thermodynamics POV, the "logic" does not lead to "god", because that answer solves nothing:
- A god does not come from nothing. Thermodynamics prevents this.
- A god does not create itself. Thermodynamics prevents this.
- A god was not created.
The subtext to either series of reasoning, of course, is the "it was there all the time" sally. The difference: The universe is real, here now, and assuming it was there all the time in some form isn't a huge leap of any kind, it just asserts the status quo in regions we cannot confirm.
God (or gods), however, has/have not been demonstrated to be real, and so three leaps have to be taken: first, the existence in the first place; second, the "there all the time"; and third, that this is somehow relevant to us.
I choose the simple answer: the universe, in some form, was there all the time. That could be wrong, but it's what little our current physics seems to imply.
Predictive capacity isn't just a matter of describing how known things happen; a theory should also describe what will happen in a previously unknown situation, which is where experimentation comes in, whether contrived or found in nature. Take the theory that angels pushed the planets around and that the movement of the stars was governed by the whims of the gods: when a theory came along (Newton's gravitation) that both described current phenomena and predicted something previously unexpected (the return of Halley's comet), it was a resounding vindication of the theory.
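Newton's gravitation implies Kepler's third law, from which a return like Halley's can actually be computed; a minimal sketch (the 17.8 AU figure is the comet's approximate semi-major axis, used here for illustration):

```python
def orbital_period_years(semi_major_axis_au):
    """Kepler's third law for bodies orbiting the Sun: T^2 = a^3
    (T in years, a in astronomical units)."""
    return semi_major_axis_au ** 1.5

# Halley's comet: a semi-major axis of roughly 17.8 AU gives a period
# of about 75 years, matching the observed return interval.
print(round(orbital_period_years(17.8)))  # 75
```

That a simple power law, derived from the theory rather than fitted to the comet, lands on the observed interval is exactly the kind of reach a whim-of-the-gods story can never have.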
Yes, and the converse is also crucial: for example, the Michelson-Morley experiment observed a phenomenon (or, rather, the lack of one) which defied explanation under Newtonian mechanics. Because Newton's theory is a good explanation, there was no way to make minor adjustments to it that could explain the null result. Instead, we got special and then general relativity, which completely changed the explanation to one in which gravitational forces don't really even exist.
To put it another way, what you said is that good explanations have "reach"; they explain more than the phenomenon they were created to explain. Further, they also tell us what those other phenomena are, because the explanation itself implies that reach (though sometimes we don't see all of the implications). And, finally, they are not easily modifiable to account for new observations which don't fit the theory.
This makes explanatory theories far more than simple predictive tools, and is the reason that the empiricist view of science as merely a process for deriving predictive rules is incorrect.
The current IRS regulations effectively require people to overpay their income taxes, which results in nearly everyone getting a refund, which they want processed quickly; apparently it's fine for the government to hold money you didn't actually owe, right up until you learn exactly how much they're holding. If, on the other hand, people have to mail in a check, they don't care if it takes the IRS a few months to verify everything.
Simple solution: eliminate the regulations that require overpayment, such as the rule that penalizes you for underpaying if your withholdings are inadequate to cover your liabilities and aren't at least as large as the prior year's withholdings. Some, perhaps many, people will still choose to overpay, as a sort of brain-dead savings plan, but many will reduce their withholdings, and those that still overpay will have no basis for complaint about a slower refund, since it was their choice.
But, then, I think the whole concept of mandatory withholdings is evil and wrong. It's just one of many ways that taxpayers are misled about how much they're paying. It's not the worst of such deceptions, but it's a significant one.
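For concreteness, the safe-harbor logic described above can be sketched as a quick check. This is an illustrative simplification, not tax advice; the function name and thresholds are assumptions (the actual rules use 90% of current-year liability or 100% of prior-year liability, with a 110% tier at higher incomes):

```python
def meets_safe_harbor(withheld, current_liability, prior_year_liability):
    """Illustrative sketch of the underpayment safe harbor: no penalty
    if withholding covers 90% of this year's liability or 100% of last
    year's (real rules add a 110% tier for higher incomes)."""
    return (withheld >= 0.90 * current_liability
            or withheld >= prior_year_liability)

# Withheld $9,000 against a $10,000 liability: within the safe harbor.
print(meets_safe_harbor(9_000, 10_000, 12_000))   # True
# Withheld $7,000, owed $10,000 both years: penalty territory.
print(meets_safe_harbor(7_000, 10_000, 10_000))   # False
```

The point of the complaint above is that the penalty side of this check pushes nearly everyone toward the overpayment branch.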
I don't know that I would state it that way, just because the fundamental measure of the quality of an explanation is its capacity to predict the results of natural phenomena.
That is only one measure of quality. Another that is equally important but harder to describe is that a good explanation is hard to change. There are all sorts of bad explanations which predict phenomena with perfect accuracy but which can be trivially modified to also address any new, different observation which didn't fit the prior form of the explanation.
One example (cadged from David Deutsch's book "The Beginning of Infinity") is the Greek myth of Persephone and the changing seasons. The myth perfectly predicts that the seasons will change, and when, but because it's all based on the whims of gods with magical powers, you can trivially alter it to explain/predict any version of events you like... which means that in reality it doesn't actually predict anything.
Good explanations, on the other hand, cannot be easily altered. Suppose, for example, that it were discovered that every 963rd year the seasons swapped. The scientific explanation for the seasons (the tilt of the planet causing increased insolation in the hemisphere tilted toward the sun, due to lengthened days/shortened nights and a more direct angle of incidence) simply could not provide any explanation for such a swap, unless we can find some mechanism to quickly shift the planet's axial tilt by ~30 degrees.
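The tilt explanation is rigid in exactly this sense. A toy model (the sinusoidal declination formula and day-offset constant below are illustrative approximations, not precise astronomy) pins the seasonal cycle entirely to the axial tilt; set the tilt to zero and the seasons vanish, and there is no knob anywhere that could make them swap every 963 years:

```python
import math

def noon_sun_elevation(latitude_deg, axial_tilt_deg, day_of_year):
    """Approximate noon solar elevation in degrees, assuming a circular
    orbit and a sinusoidal solar declination (equinox near day 81)."""
    declination = axial_tilt_deg * math.sin(
        2 * math.pi * (day_of_year - 81) / 365)
    return 90.0 - abs(latitude_deg - declination)

# At 45 N with Earth's ~23.44 degree tilt: high sun near the June
# solstice (day 172), low sun near the December solstice (day 355).
print(noon_sun_elevation(45, 23.44, 172) > noon_sun_elevation(45, 23.44, 355))  # True
# With zero tilt, the two dates are indistinguishable: no seasons.
print(noon_sun_elevation(45, 0.0, 172) == noon_sun_elevation(45, 0.0, 355))  # True
```

Contrast that with the myth, where any parameter of the story can be rewritten at will.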
This characteristic of good explanation is not the same as falsifiability, BTW. The mythical explanation for seasons is also falsifiable.
I have an HP MicroServer running Debian Stable, with several VMs running under Xen.
I love the MicroServer. Quiet, easy to work on, and inexpensive enough that I'm going to just buy a second one as a hot spare.
It doesn't support hot swapping of hard drives, but for my home use I don't need four nines reliability; powering down to swap drives is just fine for me.
I run an email server with a small number of users (family and a few friends). This makes me appreciate sysadmins more.
I am planning to switch from using Xen VMs to using Docker containers.
It would be nice if the article mentioned what browsers/plugins were vulnerable, wouldn't it? (And does this cover api.jquery.com or just the home page?) Although it wouldn't surprise me if they just don't know yet, since jQuery is still investigating.
I'm pretty sure I'm up to date with everything, but...
Wow you're dense.
I'm saying that what the US is doing in Syria is exactly equivalent to what Russia is doing in Ukraine.
Because, according to international law, it is.
The sanctions being imposed against Russia are for Russia taking literally the exact same actions the US is currently taking in Syria.
You can argue the relative morality all you want, but we're still, ultimately, invading a sovereign nation.
According to the article, the library itself wasn't affected.
Plus, most people don't use jQuery.com as a CDN. Instead, jQuery recommends you use Google's CDN if you want to use a CDN for jQuery.
Of course, this is still bad - I visit jQuery.com fairly frequently to check the documentation. The article doesn't say what was required for the malware to run so I have no idea if I was vulnerable to it or not, but if it was dropped on all pages and not just the home page, I definitely could have been hit by it.
Actually, the book was from 1988, and draws on a huge body of research.
Also, rote memorization was the research topic precisely because it pushes the brain's memory functions directly rather than training techniques. That's why research showing improvement has gone on to discover that the subjects who improved had developed memory systems, not made their brains stronger by flexing them repeatedly.
Finally, let's excerpt from your paper:
Participants were randomly assigned to 1 of 4 groups: 10-session group training for memory (verbal episodic memory; n=711), or reasoning (ability to solve problems that follow a serial pattern; n=705), or speed of processing (visual search and identification; n=712); or a no-contact control group (n=704). For the 3 treatment groups, 4-session booster training was offered to a 60% random sample 11 months later.
So far, so good.
Memory training focused on verbal episodic memory. Participants were taught mnemonic strategies for remembering word lists and sequences of items, text material, and main ideas and details of stories. Participants received instruction in a strategy or mnemonic rule, exercises, individual and group feedback on performance, and a practice test. For example, participants were instructed how to organize word lists into meaningful categories and to form visual images and mental associations to recall words and texts. The exercises involved laboratory-like memory tasks (eg, recalling a list of nouns, recalling a paragraph), as well as memory tasks related to cognitive activities of everyday life (eg, recalling a shopping list, recalling the details of a prescription label).
The memory training participants were taught new techniques. This is skill, not brute force. Doing push-ups the same way over and over would build bigger muscles; this, by contrast, is teaching people to do those push-ups with their hands in a better position, which requires less effort and lifts the body more efficiently.
Reasoning training focused on the ability to solve problems that follow a serial pattern. Such problems involve identifying the pattern in a letter or number series or understanding the pattern in an everyday activity such as prescription drug dosing or travel schedules. Participants were taught strategies to identify a pattern and were given an opportunity to practice the strategies in both individual and group exercises. The exercises involved abstract reasoning tasks (eg, letter series) as well as reasoning problems related to activities of daily living.
Reasoning training was based on teaching techniques to analyze and approach problems. Again, technique. This is like learning about Kepner-Tregoe problem analysis.
Speed-of-processing training focused on visual search skills and the ability to identify and locate visual information quickly in a divided-attention format. Participants practiced increasingly complex speed tasks on a computer. Task difficulty was manipulated by decreasing the duration of the stimuli, adding either visual or auditory distraction, increasing the number of tasks to be performed concurrently, or presenting targets over a wider spatial expanse. Difficulty was increased each time a participant achieved criterion performance on a particular task.
K. Anders Ericsson explains something called the "OK Plateau". Most people learn initially by cognitive effort, and then internalize that into an autonomous task: it moves from activating the prefrontal cortex to activating the basal ganglia. At some point, people subconsciously decide they're doing well enough, and cease improving.
Ericsson outlines three strategies experts use. Deliberate focus brings the task into cognitive recognition; goal-oriented behavior demands improvement; and immediate feedback points out current performance so the experts can analyze and adjust for their shortcomings.
Having trained myself in speed-reading, I can relate to the speed-of-processing study. I've had to deliberately focus on the RSVP, analyzing my own cognitive process. Initially, my mind would mull over words, return to words I'd already read, and stop focusing on what I was reading. That can be done in the free time between words, to rebuild and reanalyze, but not for extended blocks of 1-2 seconds when RSVPing at 450-800 words per minute. My mind also tends to wander to other related thoughts, which I had to stop.
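For a sense of the timing budget at those rates, each word in an RSVP stream gets only a fraction of a second on screen:

```python
def ms_per_word(words_per_minute):
    """Per-word display time in an RSVP reader at a given rate."""
    return 60_000 / words_per_minute

print(ms_per_word(450))  # 133.33... ms per word
print(ms_per_word(800))  # 75.0 ms per word
```

At 75-133 ms per word, even a one-second lapse of attention costs seven or more words, which is why the mind-wandering has to be trained out.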
By increasing speed, the researchers demanded additional focus. By adding distractions, the researchers demanded improved filtering of distractions specifically (rather than just internal thought). These changes largely demand the subject improve focus, accept a certain error rate, and employ strategies to maximize recognition of the most information in the least time. When multiple cognitive tasks are present, the subject must recognize the recognizable information so as to attend to it first, and move to the less-recognizable once the delay in processing won't cost so much (diminishing returns); when multiple, time-sensitive tasks are presented, rapid prioritization becomes important.
This particular part of the research provided an environment in which direct focus was enforced, goals were made explicit, and immediate feedback was provided. Pattern behavior would obviously develop in such a strict environment, up to physiological limits.
None of that research says the brain bench-pressed a bunch of information and became stronger and tougher. It suggests skill development, or at least the strong possibility of it. My discourse above about cognitive processing skills is an implied likelihood not addressed by the paper; the paper itself specifies the teaching of specific, researcher-selected mnemonics and problem-solving skills rather than the exercise of basic mental faculties.
Nothing in there suggests the brain is a muscle and benefits from exercise. Much of that directly references technique, while the remainder supplies a situation where technique could easily develop and would be useful. I would bet money that tasks requiring similar cognitive effort and load on the same mental faculties, yet wholly unaided by any technique which could improve any of the things tested, would show zero improvement after the experiment.