Comment Re:Note the cameras, lights, and antennas. (Score 1) 122

The part of the GRASP Lab's quadrotor work that has impressed me the most is simply the controllers they have for their quadrotors. Unlike wheeled robots, quadrotors aren't even passively stable. The lab's earlier videos (e.g., "Aggressive flight maneuvers") are still very cool. They certainly weren't dealing with the perception parts of the problem, but that wasn't the point; the controllers were.

Of course, that's past research. What about this work? I assume it builds on those earlier controllers, but it may well be doing interesting things besides. I'd need to take a look at their new publications to see what's going on under the hood.

Comment Re:Everyone a specialist now (Score 1) 474

Or it's because abstraction is a powerful tool. We don't need to consider every detail of a thing to abstract out the phenomena we are interested in and come up with viable models for it.

Yes! Which seems related, and is itself remarkable...

There is also a (developing) theory of abstraction and bisimulation. I don't know how helpful it is...

Comment Re:Everyone a specialist now (Score 1) 474

Call me an idiot, but isn't the solution to this paradox just Occam's Razor?

Yeah -- practically, I think it is. I mean, that's even what machine learning algorithms do, in essence: they assume a prior that assigns higher probability to lower-complexity models. The details of what Occam's Razor means then become a subject for debate -- there are lots of priors one could choose -- but, as an imprecise, guiding principle, it seems to do the job!
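A toy illustration of that "Occam prior" idea (everything here -- the data, the candidate models, the penalty weight -- is invented for the sketch): score each model by fit error plus a penalty on its parameter count, and the simplest adequate model wins.

```python
# Data generated by the simple "true" law y = 2x + 1, no noise.
data = [(x, 2 * x + 1) for x in range(6)]

# Candidate models with their parameter counts ("complexity").
models = {
    "constant y=6": (lambda x: 6, 1),
    "line y=2x+1": (lambda x: 2 * x + 1, 2),
    "cubic y=2x+1+0x^3": (lambda x: 2 * x + 1 + 0 * x**3, 4),
}

def penalized_score(f, k, lam=0.5):
    sse = sum((f(x) - y) ** 2 for x, y in data)  # model fit (squared error)
    return sse + lam * k                          # plus an Occam penalty

best = min(models, key=lambda name: penalized_score(*models[name]))
```

The line and the cubic fit equally well, so the penalty breaks the tie in favor of the line -- the crude, finite-dimensional version of "prefer lower-complexity models."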

More philosophically, I'd say Occam's Razor has a dual, which is the idea that Asimov called The Relativity of Wrong. Put them together and you're looking at the tradeoff between model complexity (Occam's Razor) and model fit (Relativity of Wrong), which is precisely the tradeoff that theories of learning complexity explore.

Maybe it formally can't be described, except as a simulation. Which gives us hope - our simulations are bound to improve over time as we learn more of the underlying rules. We shouldn't lose sight of the fact that reductionism actually works - learning local rules is reductionist, but running simulations using those rules allows us to predict global behaviour - just not in a closed form that might equate to "understanding". But the power of science is in prediction, not in "understanding" things, so it's fine.

I think I agree, but I also think that the lack of "understanding" -- the lack of "mind-sized models" -- is going to get more and more frustrating! (Even if it's unavoidable.)

It also seems that reductionism and holism can and do complement one another. E.g., the results of simulations involving various "reductionist" pieces can be used to refine those pieces themselves. The simplest example would be if you had noisy measurements of a real-valued signal at different times ("local," "reductionist" information), and you were also able to obtain an independent measurement of its integral ("global," "holistic" information). Knowledge of the integral would improve your estimates of the signal values themselves. And all this example needs is standard least-squares linear estimation.
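That fusion problem even has a closed form. Here's a sketch (function and variable names are mine; the formula comes from setting the gradient of the weighted least-squares cost to zero):

```python
def fuse(y, s, var_point=1.0, var_integral=1.0):
    """Least-squares estimate of a signal from noisy pointwise values y
    and an independent noisy measurement s of its sum ("integral").
    Minimizes ||x - y||^2 / var_point + (sum(x) - s)^2 / var_integral."""
    n = len(y)
    rho = var_point / var_integral
    total = (sum(y) + n * rho * s) / (1 + n * rho)  # estimated sum of x
    corr = rho * (total - s)                         # uniform correction
    return [yi - corr for yi in y]

# Pointwise readings sum to 6, but the "global" measurement says 9;
# with equal variances the estimate splits the difference and nudges
# every "local" value upward by the same amount.
est = fuse([1.0, 2.0, 3.0], s=9.0)
```

Note how the holistic measurement improves every local estimate at once, which is the point: global information refining the reductionist pieces.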

Comment Re:Everyone a specialist now (Score 5, Informative) 474

If you don't do reductionist science, it is hard (but possible) to receive funding since everyone is trained in anti-systems (reductionist) theory.

Wait. Really? There are entire fields that do nothing but systems theory. The names shift. Cybernetics. Systems theory. Control systems. Complex networks. Cyber-physical systems. There are lots of people doing work in precisely the areas you suggest. Take a look at the NSF's "Broad Agency Announcements." There is funding.

...

I do find it a bit amazing that science works at all. In machine learning, there are notions of the complexity of learning, and one of the basic ideas is that, as the class of models you are willing to consider grows, the amount of data you need to determine, with reasonable statistical significance, which of those models describes the data grows very rapidly -- so rapidly that it is a miracle that we have apparently learned anything at all. See "VC dimension," "Rademacher complexity," etc.
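For a finite model class the classic PAC bound makes this concrete: in the realizable case, roughly (ln|H| + ln(1/δ))/ε examples suffice, and the trouble is that |H| itself typically grows doubly exponentially in the problem size. A sketch (the function name is mine):

```python
from math import ceil, log

def sample_bound(num_hypotheses, eps, delta):
    """Examples sufficient to PAC-learn a finite class (realizable case):
    m >= (ln|H| + ln(1/delta)) / eps."""
    return ceil((log(num_hypotheses) + log(1.0 / delta)) / eps)

# If the model class is ALL boolean functions on n input bits, then
# |H| = 2**(2**n), so ln|H| = (2**n) * ln 2 and the requirement explodes:
for n in (3, 5, 10):
    print(n, sample_bound(2 ** (2 ** n), eps=0.1, delta=0.05))
```

The bound is only logarithmic in |H|, but since |H| is doubly exponential in n, the data requirement is still exponential in n -- which is exactly why an unrestricted model class is hopeless.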

The best explanation I can come up with is that the class of physical theories the human mind can conceive is actually quite limited (or, our priors are very good), and that it is evolution, over millions of years, that has gathered the necessary data to build a brain capable of conceiving of only the right theories, and that the role of conscious experimentation is only to narrow things down within this already-restricted set.

Because if the human mind is not much more limited than we like to think, then I do not know how we know anything.

Comment Re:America's future can be in both (Score 1) 630

4th: Read "Brave New World" by Aldous Huxley... you seem to aspire to that sort of world... it's a dead end. Make-work jobs won't help our economy.

Yes, I've read it. Meaningless lives, sweating on the hedonic treadmill.

In my post, I was not prescribing make-work as a solution to America's economic problems. I made no prescriptions at all, because I do not know what should be done. My only point was that automation, which is often presented as a silver bullet, will create very unequally distributed rewards.

2nd: Jobs are a BYPRODUCT of productivity.

I get what you're saying: You build a factory that employs five guys, and, sure, you may have only employed those five guys, but they will spend their money in the local community, which helps to spur the creation of secondary and tertiary jobs.

The problem is that, the more efficient the operation is, the less money actually trickles down to the community. We already see this, for instance, in data centers, which are the most automated facilities currently in operation: You buy a big building, fill it with computers, staff it with a dozen people, and generate significant wealth for the owners but contribute no more to the local economy than that handful of salaries.

1st: We're trying to make money

Ultimately, I do not know what this means. Since money only has value in inequality (double everyone's account balance tonight, and, once everyone knows, each dollar will be worth half as much), we're obviously not making money in the aggregate. Precisely what role it serves still eludes me.

And if we saturate the world we can just create new products people didn't even know they wanted before.

Now who's imagining a Brave New World? ;-)

Comment Re:America's future can be in both (Score 1) 630

If we mechanize enough then the labor costs become irrelevant

Are we trying to make widgets, or jobs? And who is "we"?

Say someone with the capital to invest in automation builds a giant widget factory in Kentucky. They hire five guys to man the control room. How much good does it do that Kentucky town? The only people benefiting are those five guys, and the owner of the robots. From everyone else's perspective, that factory might as well be in China.

Now you can say, "there will now be robot maintenance jobs." Sure. There will be. But obviously factory owners will not be contributing as much to the local economy by paying robot-fixers as they would by paying workers -- because if they did have to pay as much, then the robots wouldn't be a sensible investment.

Automation can increase manufacturing output. It can make people with capital very wealthy. It can create a small number of highly-skilled jobs. It will not put a ton of unemployed people back to work.

Comment Re:Don't count this out yet (Score 1) 211

FP is always rational.

The floating point numbers (say, IEEE whatever-whatever) are a subset of the rationals, sure.
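In Python this is directly observable: `fractions.Fraction` recovers the exact rational that a double actually stores. (A sketch; the big numerator below is the standard IEEE-754 binary64 value for the literal 0.1.)

```python
from fractions import Fraction

# Every finite IEEE-754 double is exactly p / 2**q for integers p, q.
f = Fraction(0.1)  # the exact rational stored for the literal "0.1"

# 1/10 has no finite binary expansion, so the stored value is slightly off:
exact_tenth = Fraction(1, 10)
assert f != exact_tenth
assert f == Fraction(3602879701896397, 2 ** 55)
```

So "FP is always rational" is literally true -- just for a sparse, power-of-two-denominator subset of the rationals.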

For numerical work, it's meaningless since the rationals are dense on R.

Sure. I think my point wasn't as immediately practical as that; it was more philosophical. Some revered mathematician was quoted as saying something to the effect of "progress in mathematics happens by calling things the same that are different" -- e.g., saying that a real number IS a Cauchy sequence of rationals. In the same way, you can say that a C++ object that stores a rational (and some state), and can advance to a next rational, IS the sequence it generates, and so IS a real number (if the sequence is Cauchy -- which, if the object is a black box, you cannot tell from the outside). However, I'd neglected something obvious:

That same code is represented on the computer by a finite-length sequence of symbols from a finite alphabet, so it can also be interpreted as being a natural number. Since we have a bijection from the naturals to the sequence-generating programs, and we know that the set of reals is larger than the set of naturals, the map from sequence-generating programs to the reals must not be onto.
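The "program as a natural number" step is almost literal in practice: read the source bytes as the digits of one big integer. A sketch (the sample source string is arbitrary):

```python
# Any program text is a finite string over a finite alphabet, hence
# encodable as a natural number -- an injection from programs into N.
src = "def f(n):\n    return n + 1\n"
n = int.from_bytes(src.encode("utf-8"), "big")

# The encoding is invertible, so no information is lost:
back = n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")
assert back == src
```

Since every program has a distinct number but the reals are uncountable, the counting argument goes through: some reals have no program converging to them.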

So I guess there are real numbers that it is impossible to write programs to converge to (even with arbitrary-precision rationals). Huh.

(All this must be extremely introductory computability theory... It's got to be...)

Comment Re:Don't count this out yet (Score 1) 211

Since one way to define a real number is as an (equivalence class of) sequences of rationals, I suppose any object that (1) stores a rational number, and (2) can advance to a "next" rational number, could be called a real number. (That's so long as the sequences it generates are Cauchy.)

I guess I'm saying you just need to make all your evaluation really, really lazy, and you can work with arbitrary precision. :-)
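Here's what that lazy "real number" might look like as a sketch: a generator of exact rationals, using Newton's iteration for the square root of 2 (the function name is mine):

```python
from fractions import Fraction

def sqrt2():
    """A 'real number' as a lazy Cauchy sequence of rationals:
    Newton's iteration x <- (x + 2/x)/2 over exact Fractions."""
    x = Fraction(3, 2)
    while True:
        yield x
        x = (x + 2 / x) / 2  # Newton step; stays rational forever

it = sqrt2()
approx = [next(it) for _ in range(5)]
# every term is an exact rational, yet the sequence "is" the
# irrational sqrt(2); convergence is quadratic
```

Each advance is one "lazy evaluation" step, and you can pull as much precision as you're willing to pay for.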

Comment Re:Whatever their job is.... (Score 1) 1303

While apple may save money by manufacturing overseas, they can take every penny they save and spend it on things like research and design. That creates high paying R&D jobs, which are much more attractive than the $10 an hour pay they would probably pay to a non union worker in a factory in the US.

Since when did Apple do R&D?

Apple does a little D. But the closest thing to a technological advance that has recently been associated with Apple would be Siri, which they bought from SRI International. That work, in turn, was mostly funded by tax dollars.

In defense, it works like this:

Government labs do taxpayer-funded research.

If the research bears fruit, the lab licenses the technology, at a loss (the license fees are typically minimal), to a defense contractor.

The defense contractor sells the final product at a profit to the taxpayer.

In that industry, why do research when the government will do it for you?

The only industries I can think of that really do research are (1) pharmaceuticals, and (2) microelectronics. Because it pays off. Intel is ahead of AMD on the process roadmap, and look at the results. Pfizer discovers Viagra and they make a healthy profit. I don't know where else research is happening outside the military-academic complex.

Comment Re:education is only useful for jobs (Score 1) 314

undergrads overpay to make up for grad students who underpay

My understanding is as follows:

There are basically two business models used by credible universities, and neither is based on undergraduate tuition.

At a Tier-1 research university, the real core of the business model is research grants. It works like this:

Professors write grant proposals. The grants are typically split about 50%/50% between the professor (who uses it to pay his grad students and buy equipment), and the university.

Grad students write conference and journal papers to keep the funding agencies happy, and to increase the professor's credibility and ability to compete for subsequent grants.

That's the business model. Teaching undergrads is just an afterthought, a necessary ritual to keep up the appearance and prestige of a university.

At an elite private college, the real core is different: it's donations from rich alums. Here the idea is to maintain a good ol' boys' network with links to finance and other lucrative professions, and to produce alumni who have fond memories of their institution. The institution then relies on donations from those alums.

(Very-low-tier, for-profit colleges -- the kind you see ads for on the subway -- do base their business models on tuition, but I'm only talking about "real" universities.)

Comment Re:Distance calculation is trivial... (Score 4, Interesting) 316

...which raises the question: What is the most efficient way to store points on the sphere for lookup? Computationally? And in terms of storage?

1.) You can store lat/long, and use the Haversine formula, as you suggested. This requires trig functions, and has O(n) complexity; you need to iterate through all the points. You also have varying resolution over the surface, which makes bounding and early-outs a bit harder.

2.) A great many other coordinate charts also exist, and it's hard to say why you should choose one over the other without looking in detail at how the distance calculations are performed, etc.

3.) By using multiple charts -- e.g., a cube projection -- you can avoid issues with singularities, at the cost of branching. The complexity of distance calculations depends on the projection, but, without looking too carefully, my bet is that, in terms just of raw speed, cubemap vs. lat/long is probably a wash.

4.) Why use a coordinate chart at all, when you can use an embedding? If you store points in 3d, proximity calculations (since the points are on the sphere) just become a dot product. Much faster! It also opens up the possibility (if you will be doing many lookups but few insertions) of storing indexes sorted along the three axes (or more!) to speed bounding-box (or, more generally, sweep-and-prune) calculations. Bins, bounding volume hierarchies, and the other standard tricks of computational geometry come into play. On the other hand, you're wasting a lot of codewords on points that don't actually lie on the sphere.

5.) Is there a more efficient use of codewords? Perhaps a (nearly-)constant-resolution encoding scheme? If you start with the idea that a node in an octree can be thought of as an octal number, you can see how to encode points as real numbers in the interval [0, 1] (e.g., "octal: .017135270661201"). Of course, this still wastes codewords on points not on the sphere, so let's consider a refinement: at each level of the octree, discard any cube that does not intersect the sphere, and use arithmetic coding, with the base varying between 8 and 2 depending on the number of child cubes that intersect the sphere. This now seems like a memory-efficient way to encode points on the sphere -- but it is surely not computationally efficient. On the plus side, the same idea works for any manifold embedded in any Euclidean space, so at least it generalizes.

6.) Since #5 is a mapping from [0,1] to the sphere, one wonders if there are space-filling curves on the sphere. Of course there are -- e.g., the Hilbert curve in 2d, composed with any inverse coordinate chart. Not that this helps much!
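To make #1 and #4 concrete, here is a sketch (function names are mine; a spherical Earth of radius 6371 km is assumed):

```python
from math import acos, asin, cos, radians, sin, sqrt

R_KM = 6371.0  # mean Earth radius -- a spherical-Earth simplification

def haversine_km(lat1, lon1, lat2, lon2):
    """Option #1: great-circle distance from (lat, lon) pairs in degrees."""
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = (sin(dphi / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlmb / 2) ** 2)
    return 2 * R_KM * asin(sqrt(a))

def to_xyz(lat, lon):
    """Option #4: embed a (lat, lon) point as a unit vector in R^3."""
    p, l = radians(lat), radians(lon)
    return (cos(p) * cos(l), cos(p) * sin(l), sin(p))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def embedded_km(u, v):
    """Distance via the embedding. For mere nearest-neighbour RANKING,
    the dot product alone suffices -- no trig at all."""
    return R_KM * acos(max(-1.0, min(1.0, dot(u, v))))
```

Sanity check: a quarter of the equator comes out the same both ways, and ranking by dot product agrees with ranking by distance.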

I think my favorite of these is #5, but, practically, #1 or #4 are probably better choices.
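The cube-discarding step in #5 is easy to sketch (names are mine; the sphere is the unit sphere, and a cell survives iff its nearest corner is inside radius 1 and its farthest corner is outside):

```python
def meets_unit_sphere(lo, hi):
    """True iff the axis-aligned box [lo, hi] intersects the unit sphere's
    surface: nearest point within radius 1, farthest point beyond it."""
    dmin = sum(0.0 if a <= 0.0 <= b else min(abs(a), abs(b)) ** 2
               for a, b in zip(lo, hi))
    dmax = sum(max(abs(a), abs(b)) ** 2 for a, b in zip(lo, hi))
    return dmin <= 1.0 <= dmax

def surviving_cells(level):
    """Subdivide the bounding cube [-1,1]^3, discarding cells that miss
    the sphere; return how many cells remain after `level` subdivisions."""
    boxes = [((-1.0,) * 3, (1.0,) * 3)]
    for _ in range(level):
        nxt = []
        for lo, hi in boxes:
            mid = tuple((a + b) / 2 for a, b in zip(lo, hi))
            for i in range(8):  # the 8 child octants of this cell
                clo = tuple(mid[k] if (i >> k) & 1 else lo[k] for k in range(3))
                chi = tuple(hi[k] if (i >> k) & 1 else mid[k] for k in range(3))
                if meets_unit_sphere(clo, chi):
                    nxt.append((clo, chi))
        boxes = nxt
    return len(boxes)
```

At the first level all 8 octants touch the sphere; at the second, 7 of each octant's 8 children do (only the innermost corner cube misses), so 56 cells survive. The number of surviving children at a node is what would set the coder's base.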

So how do real GIS systems do it?

Comment Re:LaTeX (Score 1) 470

It's hard to screw up a LaTeX document because, beyond doing the basics like defining headings and writing paragraphs, it's hard to do anything in LaTeX. If there's a style file or an environment that does what you want, you're golden, but if you want to design your own, god help you.

What I wish is that LaTeX had a sane box model, and a language designed around the idea of defining constraints (soft and hard) that relate various boxes. It is close to being this, but the gap is frustrating. It also needs the ability to flow content between boxes.

Furthermore, math should be semantic. I should be able to evaluate a properly-written LaTeX math expression. I say this because, at present, the semantics and presentation of math are so tied together that you cannot, e.g., switch a document from one-column to two-column format and expect your math to reflow accordingly. LaTeX cannot reflow your math, because LaTeX does not sufficiently understand the structure of the expressions you're writing. Really, in order to automatically typeset math, you need to understand its parse tree.
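A toy version of what "parseable math" buys you (every name and layout rule here is invented for illustration): keep the expression as a tree, and let the renderer pick a presentation to suit the available width.

```python
# The same semantic object renders as display math or as a narrow inline
# fallback -- exactly the reflow decision that presentation-only markup,
# where the \frac is baked in, cannot make.

class Quotient:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def render(self, display=True):
        if display:
            return r"\frac{%s}{%s}" % (self.num, self.den)
        return "(%s)/(%s)" % (self.num, self.den)  # narrow-column fallback

e = Quotient("a+b", "c")
wide = e.render(display=True)     # a stacked fraction for wide columns
narrow = e.render(display=False)  # a slashed form for narrow columns
```

The typesetter can only make that stacked-vs-slashed choice because it holds the parse tree, not a string of layout commands.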

So that's what I want. A sane box model, with constraints and flow between boxes. And parseable math.

The HTML DOM seems a lot more consistently designed to me, but there are no good typesetting systems that take HTML+CSS as input, as far as I know (and MathML may be more semantic, but it is also much too verbose). HTML also currently lacks one very important thing that LaTeX has, which is the ability to define new tags/commands in terms of old ones. So although with some imagination HTML is almost a viable alternative to LaTeX, it is not quite.

I stick with LaTeX. But it leaves much to be desired.
