Pneumatic cylinders are stronger per unit volume, but a vacuum can exert immense force regardless of volume. So if a cylinder were made very tiny but geared up massively, couldn't the same pneumatic air tank deliver far more energy for its volume and structural integrity? Some day, building, filling, and dropping cylinders of space vacuum to Earth could therefore be a truly inexhaustible source of renewable energy. Perhaps vacuum cylinders could even be used to blow back captured air (a vacuum jet) to slow descending space vehicles... Does this seem reasonable?
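For scale, the energy side of this is easy to bound: the most work the atmosphere can ever do refilling an evacuated vessel is atmospheric pressure times its volume. A quick back-of-the-envelope sketch (standard sea-level pressure assumed):

```python
# Upper bound on the energy stored in an evacuated vessel: the atmosphere
# can do at most P_atm * V of work refilling it, however strong the walls.
P_ATM = 101_325.0  # Pa, standard sea-level pressure

def max_vacuum_energy_j(volume_m3: float) -> float:
    """Maximum work (J) the atmosphere can do filling a vacuum of this volume."""
    return P_ATM * volume_m3

# A full 1 m^3 of hard vacuum is worth at most ~101 kJ (about 0.03 kWh).
one_cubic_meter_j = max_vacuum_energy_j(1.0)
```

So gearing a tiny cylinder up massively multiplies force, not energy; whatever is recoverable still scales with the evacuated volume.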
Why not use a dirigible (zeppelin) for space launch? 17,500 mph at about 100 kilometers up is roughly orbital velocity. Why not use a dirigible in these stages for cheap, heavy lift into orbit: (1) hydrogen lift until the air density is low enough that pushing a balloon becomes energy-efficient (perhaps 60 kilometers); (2) pump the hydrogen out of the envelope, as the balance between air density and structural integrity allows, and heat it for rocket thrust. Use a large aerodynamic shape so that this thrust pushes the ever-lighter vehicle faster and more easily; as long as there is air resistance, there is also lift. When there is no longer air resistance or lift, a stable orbit becomes achievable. Helium balloons have gone as high as 59 kilometers; hydrogen is far lighter, and nothing is lighter than vacuum. A small thorium reactor is perhaps the ideal choice to heat the hydrogen, which expands quickly and greatly when heated. I do wonder whether heating at high speed in an ultra-thin atmosphere would pose a problem, but if it does, use that heat to expand the hydrogen for thrust... all the better.
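As a rough sanity check on the "nothing is lighter than vacuum" point, here is a small sketch comparing net lift per cubic metre at sea level (densities are approximate values at 0 degrees C and 1 atm; the gap shrinks toward zero as ambient density falls with altitude):

```python
# Net buoyant lift per m^3 of envelope = ambient air density minus
# lifting-gas density. Approximate densities (kg/m^3) at 0 degC, 1 atm.
RHO_AIR = 1.292
RHO_HYDROGEN = 0.090

def net_lift_kg_per_m3(rho_gas: float, rho_air: float = RHO_AIR) -> float:
    """Payload mass supported per cubic metre of lifting volume."""
    return rho_air - rho_gas

hydrogen_lift = net_lift_kg_per_m3(RHO_HYDROGEN)  # roughly 1.20 kg/m^3
vacuum_lift = net_lift_kg_per_m3(0.0)             # roughly 1.29 kg/m^3
```

So a perfect vacuum buys only about 7-8% more lift than hydrogen at sea level, and it demands a rigid envelope rather than a simple gas bag.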
Modern neuroscience and biologically realistic neural simulations (including some of the best deep-learning systems) use the neuron as their most fundamental primitive. One neuron does a lot, actually. It draws in new correlating axons (those firing when its other receptors are firing) when its total potentiation is insufficient to excite it, and it weakens and destroys them in the inverse case. It also draws in non-correlating axons as inhibitory receptors. And long-term potentiation (widely viewed as the basis of long-term memory) increases as the same axon forms more receptors on the same neuron. Furthermore, a neuron that is perpetually excited will shut itself down for an extended time. Each is like a little computer of its own, really.
As for what "intelligence actually is": the real problem is the lack of consensus on a common definition. The word "is" only indicates a relationship between two things without specifying what that relationship, ahem, actually is. It's a matter of defining it in a way that is broadly acceptable. Defining something can sometimes also determine it, and I think that's the case with intelligence. Most people (who care) want to determine what it is so they can define it... and yet you cannot search for something without knowing what you are searching for; in other words, without defining it.
I think this is a ridiculous pursuit. Pick one of the many working definitions that you like and work with that. If it feels insufficient, then pick or create another. Here are a few I use, any of which could be more or less complex, evolved, or designed:
Reactive Intelligence -- the ability to react to a pre-defined stimulus in a way that, under ordinary conditions, furthers a goal
E.g.: An iron that turns itself off when sitting face down and not moving (often referred to as an intelligent feature)
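To make the reactive tier concrete, here is a minimal sketch of the iron example (the names and the 30-second timeout are illustrative assumptions):

```python
# Reactive intelligence: a fixed stimulus -> reaction rule that, under
# ordinary conditions, furthers a goal (here, not burning the house down).
def iron_should_power_off(face_down: bool,
                          seconds_motionless: float,
                          timeout_s: float = 30.0) -> bool:
    """Pre-defined rule: cut power when face down and motionless too long."""
    return face_down and seconds_motionless >= timeout_s
```

No learning happens here; the "intelligence" is frozen into the rule by the designer.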
Conditioning Intelligence -- the ability to identify which reactions to which stimuli have most often furthered a goal in the past, and thereafter to react accordingly
E.g.: Pavlov's Dog...or any trial & error aka reward and punishment learning
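A minimal sketch of that conditioning tier, assuming nothing more than a net-reward tally per (stimulus, reaction) pair (all names are illustrative):

```python
from collections import defaultdict

# Conditioning intelligence: track which reaction to a stimulus has most
# often furthered a goal (been rewarded), and thereafter react accordingly.
class Conditioner:
    def __init__(self, reactions):
        self.reactions = list(reactions)
        self.tally = defaultdict(int)  # net reward per (stimulus, reaction)

    def react(self, stimulus):
        """Pick the historically best-rewarded reaction to this stimulus."""
        return max(self.reactions, key=lambda r: self.tally[(stimulus, r)])

    def reinforce(self, stimulus, reaction, reward):
        """Apply reward (positive) or punishment (negative) after the fact."""
        self.tally[(stimulus, reaction)] += reward

# Pavlov-style: after the bell is repeatedly paired with a reward...
dog = Conditioner(["ignore", "salivate"])
for _ in range(3):
    dog.reinforce("bell", "salivate", +1)
# ...the bell alone now triggers salivation.
```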
Substitution Intelligence -- the ability to identify and model observed phenomena as patterns of interacting components and, when a component in a goal-furthering pattern is missing, to swap in a substitute: one that shares the most characteristics with others that have filled the same role in the past.
E.g.: In building a hut, you've used many different kinds of hammers to bang in the nails but today you don't have a hammer. However, you have a rock that shares most characteristics with the other hammer styles (heavy, hard, and with a flat side), so you use the rock where you'd normally have used a hammer.
Substitution Intelligence is shared only among the so-called higher animals, and mostly humans. It requires general imitation learning. That is, the ability to identify that two things/people/animals have a lot of similarities and therefore one could take the place of the other....
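The hammer-and-rock story can be sketched as a simple feature-overlap comparison; this is only an illustration of the idea, with made-up feature sets:

```python
# Substitution intelligence: when the usual component is missing, pick the
# candidate sharing the most characteristics with past fillers of the role.
def best_substitute(past_tools, candidates):
    """past_tools: list of feature sets; candidates: name -> feature set."""
    typical = set.intersection(*past_tools)  # traits every past tool shared
    return max(candidates, key=lambda name: len(candidates[name] & typical))

hammers = [{"heavy", "hard", "flat_side", "wooden_handle"},
           {"heavy", "hard", "flat_side", "steel_handle"}]
options = {"rock":   {"heavy", "hard", "flat_side"},
           "pillow": {"soft", "light"}}
choice = best_substitute(hammers, options)  # the rock stands in for a hammer
```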
Of course, if everyone would just stfu until they have a peer-reviewed journal article, there would never be any peer-reviewed journal articles... Perhaps one reason AI hasn't progressed faster is this kind of brutal cynicism toward new ideas.
Granted, every premise I provided in the model derives from established science, through replicated, peer-reviewed journal articles... in fact, much of it through basic textbooks in neural science. But let's stfu about that, too, since these things don't appear to have been discussed at the same time in any single peer-reviewed journal article yet. I suppose we can only read anything if it comes directly from a peer-reviewed journal article... and only what's in one particular such article at a time... perhaps requiring a holy moment of silence between each article, to ensure a clean separation.
I've been working on these models for decades. I've done science (six accepted peer-reviewed articles) but am really an engineer, not a scientist, and I prefer it that way. I can leave publishing research to others who require it for their tenure. At one time, Slashdot was actually a mostly intellectually stimulating conversational environment...
Neural nets were traditionally based on the old Hodgkin-Huxley models and then twisted for direct application to specific objectives, such as stock-market prediction. In the process they veered from an already very vague notion of real neurons to something increasingly fictitious.
Hopefully, the AI world is on the edge of moving away from continuously beating its head against the same brick walls in the same ways while giving itself pats on the head. Hopefully, we will realize that human-like intelligence is not a logic engine and that conventional neural nets are not biologically valid and possess numerous fundamental flaws.
Rather, a neuron draws new correlating axons to itself when it cannot reach threshold (-55 mV from a resting potential of -70 mV) and weakens and destroys them when over threshold. In living systems, neural potential is almost always very close to threshold; it bounces a tiny bit over and under. Furthermore, inhibitory connections are drawn in from non-correlating axons: if two neural pathways each excite only when the other does not, then each will come to inhibit the other. This enables contexts to shut off irrelevant possible perceptions. For example, if you are in the house, you are not going to get rained on; more likely, somebody is squirting you with a squirt gun.
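A toy sketch of that recruit-under-threshold / prune-over-threshold rule (the -70 and -55 mV figures come from the text; the 5 mV-per-input weight is an illustrative assumption):

```python
# Toy neuron: recruits a new excitatory input while below threshold and
# prunes one while at/over it, so its potential hovers near threshold.
RESTING_MV = -70.0
THRESHOLD_MV = -55.0

class ToyNeuron:
    def __init__(self):
        self.weights = [5.0]  # mV contributed by each active excitatory input

    def potential(self) -> float:
        return RESTING_MV + sum(self.weights)

    def adapt(self):
        if self.potential() < THRESHOLD_MV:
            self.weights.append(5.0)   # draw in a new correlating axon
        elif len(self.weights) > 1:
            self.weights.pop()         # weaken/destroy a connection

neuron = ToyNeuron()
for _ in range(10):
    neuron.adapt()
# The potential ends up bouncing right around the -55 mV threshold.
```

Even this crude version reproduces the "bounces a tiny bit over and under" behavior: once near threshold it oscillates there rather than running away.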
Also--a neuron perpetually excited for too long shuts itself off for a while. We love a good song but hearing it too often makes us sick of it, at least for a while.. like Michael Jackson in the late 1980's.
And very importantly, signal streams that disappear but recur after increasing time lapses stay potentiated longer; their potentiation dissipates more slowly. After five pulses with a pause between them, a new receptor is recruited from the same axon as an existing one, which slows dissipation. This happens again after another five pulses, and so on repeatedly, except that the time lapse between them must keep increasing. It falls in line with the scale found on the Wikipedia page for Graduated Interval Recall: exponentially increasing time lapses, five repetitions each. Take a look at it and do the math. It matches what is seen in biology, even though that scale was developed long before modern neuroscience.
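The schedule itself is easy to generate. Assuming a roughly 5x multiplier per successful recall (the starting interval and factor here are illustrative; the Wikipedia article lists the exact values):

```python
# Graduated-interval-recall style schedule: each review interval is a fixed
# multiple (here 5x) of the previous one, starting from a few seconds.
def review_intervals_s(first_s: float = 5.0, factor: float = 5.0, n: int = 8):
    """Return the first n review intervals, in seconds."""
    intervals, t = [], first_s
    for _ in range(n):
        intervals.append(t)
        t *= factor
    return intervals

# 5 s, 25 s, ~2 min, ~10 min, ~52 min, ~4.3 h, ~22 h, ~4.5 days
schedule = review_intervals_s()
```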
For me, the grand appeal of the Raspberry Pi is its 26 I/O lines. It's difficult to find a microcontroller with so many I/O lines, particularly with any reasonable CPU power; this provides both. The downside is that the gyros in smartphones are also very useful in robotics projects. And it sure would be nice to also have access to some GPU power, computationally...
I just started this Petition. Please SIGN IT! Fight these !#@$ EXPLETIVES #@@
I've considered T-Mobile, but my work really requires good cellular connectivity, and the very limited areas of service would substantially hurt me. What I really need is Verizon's network, but AT&T or Sprint will do. I will look into Consumer Cellular, but you never know. Others I've looked at all had some difficult-to-deal-with catch. I understand no service is likely to ever be ideal...
Giving up cell phones would affect almost anyone in modern society. No, there is no realistic option of simply not buying their service.
Furthermore, the representatives of each carrier explicitly told me that the policy was instituted by them all at about the same time. They were clearly aware of this, collectively. They clearly wanted to take away our option of switching to another carrier.
Every major carrier instituted this policy at about the same time. The first thing I did was try to change carriers... before filing an FCC complaint. I really want to fight those bastards.
My contract was over and I wanted a smartphone but not a data plan. Sprint, AT&T, and Verizon all said that if I used any kind of smartphone, I must have a data plan. My brother bought a Nexus One outright, and when his carrier discovered this it added a $30-per-month data charge against his will. My plan was to use WiFi only for data...
Each carrier responded by calling me and telling me that this is their policy and therefore I was not wronged. I responded that I think law trumps company policy. As far as the FCC was concerned, that was it... they had done their due diligence, I suppose.
I sent an email to one law firm that specializes in class-action suits but never got a response.
If a lawyer anywhere on this planet would be willing to take this up as a class-action suit, I will strongly support it. I am a web developer; I can build an excellent web site to begin the process of finding the many, many other victims.
I love PostgreSQL in theory but hate it in practice. It's a pain in the ass to work with... not very productive. For a long time, I felt it was worth it to endure this for the superior design, feature set, and technical correctness.
But one day I realized that I needed to get things done and switched to MySQL. The learning curve was small, but the main kicker was that things just worked and were easily reworked. There are risks, limitations, and problems; it's very imperfect. But I get things done now... and I never have to (or care to) think about the purist philosophies I used to love to indulge in.
In the end, you have to give up perfection to go anywhere.. Otherwise, it's like having to get half-way there first, meaning you have to get half-way to half-way first, etc. recursively forever.. With MySQL I take a reasonable number of precautions for things that can go wrong, ensure there are good backups, and deal with the others as they come.
Now I think MySQL is superior for practical use by a long shot, and I think that's why it's adopted so heavily.
The key ingredients to successful technologies are:
(1) You can do something obviously cool or useful with it.
(2) It's quick and easy to learn and use.
And that's it. This is why so many successful things are made by idiots. Look at HTML: it was made by Tim Berners-Lee back when he knew very little, but 12-year-olds were picking it up and making cool (at the time) web pages. Now he knows so much more and has tons of backing from heavyweight organizations and money, yet he cannot seem to force the success of the Semantic Web. It's hard to learn and hard to work with even once you've learned it. Furthermore, it's not obvious to most people what cool or useful things you can do with it. Proponents keep saying it'll mature and get easier once tools and libraries are available... That misses the point. Even the tools mostly suck and are buggy, because the basic tech is a pain in the ass to work with. There are philosophical visionaries galore but no substantial progress beyond what grants and job requirements force people to do... and there won't be.
Like most people with experience in both the older SDLC (Software Development Life Cycle) and newer Agile, my first thought on reading the headline was, "What kind of idiot? What questions did they ask?" As I read on, several glaring problems with the "analysis" stood out.
(1) Comparison with what other methodology? I see no mention of one. It's just a focused criticism of Agile, implying that other paradigms are far better. The truth is, the percentage of successful software development projects has always been terrible. It started out with something like 90% failure rates and has only slowly improved to this day. Furthermore, the metrics used to measure success are apples and oranges. For SDLC, success means the requirements are met. For Agile, it's the same thing, repeated until the product is acceptable. In practice, SDLC leads to marking off a checkbox for each requirement and test; other problems to solve or improvements to make are thrown by the wayside unless they were specified or contractually obligated. No software has ever been completed without the developers thinking, "We could have done it better..."
(2) They are engineering / cherry-picking to create support for their conclusions. Examples follow:
(a) "Out of over 200 survey participants, we received only four detailed comments describing success with Agile." -- Oh really? Just before that, they said 28% reported success with Agile. For how many did they receive smiley faces at the end of detailed comments describing success with Agile? Zero?! Geez, then it was really a total flop!!
(b) "Sixty-four percent (64%) of survey participants found the transition to Agile confusing, hard, or slow. Twenty-eight percent (28%) report success with Agile." In my own experience too, the transition to Agile was extremely hard. Then again, it's hard to get people to convert from Christianity to Islam, too (or vice versa). That in no way addresses the effectiveness of Agile over SDLC/waterfall or anything else, as they strongly imply. It suggests that people do not like moving out of their comfort zones; people like doing things the way they always have. It's typical human nature... and consequently they resist, and prejudices arise.
(c) Ridiculous levels of outright subjective and judgmental prejudice, to the exclusion of any proper measures, repeated across different examples of the same thing rather than simply tallied as levels of negative personal feeling toward Agile. I have to say, this sounds very much like a survey given only to managers; it reads as a typical manager's point of view. These are just ignorant and arrogant personal insults, not professional analysis. Examples follow:
- Survey participants report that developers use the guise of Agile to avoid planning and to avoid creating documentation required for future maintenance.
- We received some unprecedented scathing and shocking comments about the level of competence, professionalism, and attitudes of some members of the Agile movement.
- Be aware that the Agile movement might very well just be either a developer rebellion against unwanted tasks and schedules or just an opportunity to sell Agile services including certification and training.
So... is doing a bunch of these in parallel on the horizon? I mean, perhaps they could use it to produce an explosive material at a distant location without having to travel there... or perhaps something else that would be damaging...