
Comment Re:Google versus Apple (Score 5, Insightful) 360


To amplify this 'uncanny-valley' notion: the problem with the anthropomorphizing ('attitude') approach is that it lulls the user into thinking they are dealing with a very sophisticated (sentient) system. The fiction quickly evaporates once the user makes requests that the AI quite obviously doesn't understand. At that point, the quirky personality becomes annoying (think Clippy), and the fact that the interface pretends to be as smart as a human, without actually being as smart as a human, makes it seem broken and comically insufficient.

The opposite approach, also seen in robotics and many other areas of AI (e.g. search), is to not pretend that the system is like a person. Instead, make it obvious that it is a machine, with a set input/output behavior. Users can then quickly learn how to best use this machine to accomplish tasks. If the shortcomings of the system are evident, users will not be surprised by them and will instead build these into their mental model of how the system works.

As a case study, consider the similar criticisms that have been made about Wolfram-Alpha (e.g. here): essentially, W|A is a highly sophisticated set of computation and relation engines. However it's all wrapped up inside an overly simplistic UI (a single text-entry box, without any obvious way to refine what you mean). This leads to people getting all kinds of unintended results, despite the fact that the system actually can perform the computation/analysis/lookup the user wants. It's just that there is no obvious way to tell it what lookup you meant. The overly-simplified UI implies to the user that the system will just 'figure out what you mean', but the fact is it fails to do that very frequently; the user becomes frustrated because they then have to mentally reverse-engineer W|A's parsing logic, trying to build a query that returns the kind of results they want.

In short, it's better to design a UI that is an honest reflection of the sophistication/power of the underlying technology. To do otherwise creates a bad user experience, because user expectations are not met by the available functionality.

Comment Re:It'll still be spam to me (Score 1) 219

To play devil's advocate here: What if the personalization did include elements such as whether or not you're in the market for something? What if the personalization were tuned to each person's 'spam tolerance' so that the number, type, and content of the emails were below your threshold for annoyance?

Imagine your phone breaks, and then you sit down at your computer and already there is an email along the lines of: "These are the current best smartphones that match your desires and budget. Here are links to reviews for these phones (at sites you trust). Here are links to buy any of these, if you are interested." Or, a month before Christmas, you receive an email like "Your sister would probably like the following items for Christmas. If you buy them soon, you can get better rates and they'll arrive in time for the holidays." Or you get an email like "You were interested in buying a bigger TV a month ago, but they were all too expensive. However a recent sale has the TV you like at the price you were willing to pay. Click here to buy it on Amazon (which currently has the lowest price for this item)." And so on...

In other words, imagine if the advert emails were actually useful to you. So useful, in fact, that they offset the annoyance of getting an 'out of the blue' email. If advertising emails were really that tailored, people would probably read them, and click on the links. Heck, people might even actively sign up for (even pay for!) such tailored shopping advice.

Having said all that, I agree that this kind of advertising would be fundamentally creepy and unsettling. It would very pointedly highlight just how much information companies have on us. (How did they know my phone just broke? How did they know I wanted to buy a new TV?) Creepy as it is, however, the cynic in me says that the majority of people would eventually get used to it. The main reason it won't work, actually, is because companies don't have the self-control necessary to pull it off. They will use any opportunity to mislead, lie, and annoy, as long as it gives them (or they think it gives them) a slight edge. With thousands of companies trying to out-yell each other to catch our attention, it inevitably becomes annoying. Which means that no matter how good those emails might be, we will still be aggressively spam-blocking them, and won't trust any of them.

Comment Re:some perspective (Score 3, Insightful) 312

It also depends what you mean by "belongings", though. Some people will interpret it to be all the "stuff" they own (clothes, computers, furniture, etc.). But my car is also a "belonging" and if I include it in the calculation, it accounts for a large fraction by volume. (Of course, if I'm allowed to store other stuff inside the car for the purposes of computing total volume, that changes things... That can be done without breaking anything, but somehow seems like it's violating the premise of the question, which is asking how much space all of your stuff normally takes up.)

Also, for those people who own houses, or plots of land, that would substantially increase the size of their belongings.

Point being that when interpreting the spread of answers, you have to account for the variation in how people interpret the question. (Note that I'm not complaining about "lack of options" or "lack of precision" in a Slashdot poll. Actually, one of the things I like about Slashdot polls is the analysis that goes on in the comments about how fundamentally unclear the question is. Slashdotters are probably more pedantic and detail-oriented than most people, but it's still a useful exercise... reminding us to be wary of the results of surveys, for instance, since how the question is worded, and interpreted by respondents, can massively affect the distribution of answers, and thus the analysis of the data.)

Comment Re:Confusing positions (Score 3, Informative) 477

Well there is a diversity of opinion on Slashdot, so you're inherently building a strawman, here.

Nevertheless, it's perfectly consistent to be pro-net-neutrality and anti-SOPA. The underlying principle is to maintain equal access to communication technology, and in particular to not allow consolidated power bases (especially corporations) to control the flow of information. The purpose of net neutrality is to force companies to not discriminate between information seekers and providers; this maximizes the amount of information everyone can easily access. The purpose of striking down SOPA is to prevent companies from having yet more legal power to issue takedowns, censor material, and discriminate between information seekers and providers; preventing SOPA from being passed also maximizes the amount of information everyone can easily access.

Your strawman was implicitly painting this as a debate about whether regulation is good or bad. But that's incorrect. The question is not whether we should have laws. The question is what laws.

Comment Re:OK can we agree this site sucks? (Score 3, Insightful) 82

I have mixed feelings about this site.

After quickly looking around, I was able to identify plenty of books/shows/movies that are not mentioned at all. And those that are mentioned get only quite brief articles. When you compare the coverage to what Wikipedia has, this new site looks rather small. When you also think about how much material there is in Memory Alpha, Wookiepedia, and all the other franchise-specific wikis, then this new site seems embarrassingly small.

However, after reading a few articles, I think it does bring something new. In particular, the essays are not the factual NPOV articles that Wikipedia strives for. They are in fact highly opinionated about the quality and historic impact of various parts of SF. While I didn't agree with all the entries, they seemed mostly well-researched, had lots of historical information, and pointed out other works in which the same themes had been explored.

My point is that this site gives us a different perspective. The essays and opinion pieces should be interesting to most anyone interested in SF. However I think calling it "The Encyclopedia of Sci-Fi" is a mistake. "Encyclopedia", in the modern Internet age, implies detailed coverage, in both breadth and depth; this site provides neither, from what I can see. Rather than advertising it as an authoritative factual cataloging of every SF work ever produced (which, again, is what "encyclopedia" means to most people nowadays, for better or worse), they should be emphasizing that they are providing an assortment of opinion pieces about the history of SF, written by selected experts.

Comment Re:Asking people to pay for what they use?!? OMG! (Score 5, Insightful) 397

It's not so much a moral panic, but usage-based billing is seen as bad because:

1. It's not in line with the operating costs. For gas or electricity, the more you use, the more of the resource is used up. Hence, it just makes sense to pay for usage. With bandwidth, it's not the same. There is a large base cost to having a given infrastructure; the additional cost of actually using it is comparatively small (routers and switches transferring packets do consume a bit more electricity than routers and switches idling... but this is small compared to the base cost of installing and maintaining the routers and switches at all). In general, people find it unfair for consumer prices to be largely unrelated to actual production costs (it feels arbitrary, like price gouging).

2. Related to #1, it's just generally inefficient not to use data-transmission infrastructure at near 100% capacity. Once the infrastructure is in place, it's cheap to just use it. Thus, it's overall more efficient (in terms of productivity per amount of resource used) to encourage people to use the Internet to capacity. Usage-based billing creates the opposite incentive: it encourages people to ration what is not a traditional resource. (Unused bandwidth is wasted, not banked for a rainy day.)

3. In an overall technological/economic trend sense, usage-based billing has the effect of keeping society locked into a fixed data-transmission infrastructure. The incentive to expand and improve the network, add bandwidth and capacity, is eliminated. Thus progress in telecommunications is stalled. Most people would agree that the deployment of telephones and the rapid expansion of the Internet have been overall beneficial to our economy and technological progress. Thus, it seems like continuing to expand our communications infrastructure would be a good thing. Usage-based billing maintains the status quo instead of encouraging expansion of our networks.

4. As others have pointed out, to the consumer, data bandwidth is more like cable TV or landline telephones: both of which have traditionally been a "pay per month; unlimited usage" model (with many exceptions, of course: long-distance calling, pay-per-view, premium content, ...). So there is at least precedent for similar consumer services being metered on an "access time-period" basis and not a usage basis.

Why is Internet use seen differently?

I think the short answer is: "Because it's different." Bandwidth is not a tangible resource like gas or food. Treating it as one is not efficient.

Comment Re:Bogus study (Score 1) 357

Fair enough.

However, 'garbage' is somewhat subjective. Some people prefer to pay top dollar for something that is robust and will last. Others prefer to pay less and get something less robust and more prone to failure. There are extreme cases (lemons that have no right to be sold), but even in a rational well-informed market, there is a place for 'inferior' products. For instance, for people who know they will replace their handset very frequently (for other reasons), it may make more economic sense to buy a series of cheap phones. Some people know they are clumsy, and know that they break things no matter how well-built they are, and so opt for the cheap-to-replace option (even though it breaks somewhat more rapidly, it can still be cheaper in the long run). Some companies are buying phones to be used in the field or situations where damage and theft are routine, so cheaper phones make more sense. And so on...

So, there are some good reasons why it's nice to have a spectrum of options in terms of quality. In the end, Apple and BlackBerry only offer higher-end phones, so the average 'quality' is decent. For Android phones, there is a wider spectrum, and so of course the average quality is lower.

My point is that this isn't necessarily a failing on the part of Google. They are allowing the consumer a wider range of choices. That's good, in some senses at least. (The downside, of course, is brand tarnishing: you can't rely on the 'Android' moniker to mean the hardware is quality. This means that you have to pay more attention and do more research when buying an Android phone as compared to when buying an iPhone. But that's life: the tradeoff to having more choice is having to make more decisions.)

Comment Re:scan, edge detect, match (Score 2) 209

Yeah well there's a difference between theory and practice.

Actually many of the great successes of AI (and even then some would debate how great they've been) are simple-sounding in principle but tough to get right. Things like route planning (just start a directed random walk from the start and finish and explore the graph until they connect to each other), web search (just weight results by popularity/links), document search (just show anything with a partial match), OCR (just threshold the image and match pixels to a database of font characters), voice recognition (just break it up into phonemes and look it up in a pronunciation dictionary), voice synthesis (just pre-record some phonemes and stitch them together), image recognition (just tag a bunch of images and train a neural net), and so on.

They all sound simple enough. But for an actual implementation to be successful, there are tons of pitfalls and gotchas and real-world ambiguities that need to be figured out. There's then a whole other layer of tweaking to get a reasonable idea to run in a reasonable amount of time: many problems can be brute-forced, but people typically don't want to wait forever for the answer, so ingenious algorithms for pruning the search tree or efficiently exploring the parameter space have to be designed.
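The route-planning case alone illustrates the gap: the "simple" version really is just a few lines, but it only works on toy inputs. A minimal sketch (the road network below is made up purely for illustration):

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Naive breadth-first route planner: explore outward from the
    start until the goal is reached. Fine on toy graphs, but on a
    continent-scale road network this frontier explodes -- real
    planners need heuristics (A*), bidirectional search, and
    precomputed hierarchies to prune the exploration."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route exists

# Toy directed road network (hypothetical, for illustration only)
roads = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D', 'E'],
    'D': ['F'],
    'E': ['F'],
}
print(shortest_route(roads, 'A', 'F'))  # ['A', 'B', 'D', 'F']
```

The few lines above are the "sounds simple" part; everything that makes it usable at scale is the part that doesn't fit in a comment.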

Point being, don't assume this is as easy as it sounds. If it were, then we wouldn't even be discussing it (and no one would bother using shredders).

Comment Re:It's not a bad thing (Score 3, Interesting) 219

I agree, but in such cases, isn't the solution to make current "fun" languages more "enterprisey" by improving the back-end toolchain? Disclaimer: I'm by no means an expert (I'm a physicist with a minor in CS, not a hardcore CS person), so maybe I'm way off-base here (corrections welcome).

Take Python. I love its syntax, the plethora of libraries available, the ability to rapidly prototype and see immediate results. All the things that make it "fun" really do make it productive (shorter time to a final, correctly coded solution). It's a great language. However, it doesn't run as fast as C/C++ code, for obvious reasons (interpreted, dynamic typing, etc.). There are ways to make it faster (rewriting critical subsections in C/C++, using fast libraries intelligently, various optimizers and pseudo-compilers exist, etc.). But everyone (or at least me) would love to code using Python syntax but have the code run as fast as C/C++. Best of both worlds.

In other words, what I would love to see is tons of effort put into making toolchains for making Python (or other "fun" languages) faster (and probably by association more enterprisey in terms of being type-safe, etc.). I'm not saying doing this would be easy, but there are various proofs-of-principle for compiling Python code or automatically converting it to C/C++ code and whatnot. It could be done and would allow programmers to use the clean syntax of Python to more rapidly code a project without feeling bad about the performance not being up to scratch.

Again, I'm aware of the alternatives (rewrite bottlenecks in a fast external language, etc.). But it seems to me that we've learned a lot about what makes for a nice high-level syntax, so we should automate the grunt-work of converting that syntax into fast low-level code. (Yes, I'm aware of gotchas such as dynamic typing preventing full compilation in some cases, but something like adding type hints to a high-level language would surely be less onerous for programmers than going to a lower abstraction level wholesale. Even type hints could be automatically inferred by a parser in a lot of cases, with a programmer checking that they make sense...)
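To make the type-hints idea concrete, here is what annotated Python looks like (the function itself is just an illustrative example). The hints change nothing for the ordinary interpreter, but tools in the spirit of Cython or mypyc can use exactly this kind of information to generate tighter native code:

```python
def dot(a: list[float], b: list[float]) -> float:
    """Pure-Python inner product. The annotations don't speed up
    CPython at all, but they tell a hypothetical compiler that every
    element is a float, which is what it needs to emit a tight
    C-style loop instead of generic dynamic dispatch."""
    assert len(a) == len(b)
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

The programmer keeps the clean high-level syntax; the grunt-work of specialization moves into the toolchain.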

Comment Re:Why don't the nutters think THIS is faked? (Score 4, Informative) 89

I wonder how good a telescope we would need to actually see a human being on the surface of the moon anyway?

It would have to be very good. For example, the Hubble space telescope couldn't do it. Not even close. (Despite the fact that it can image galaxies that are billions of light-years away.)

Let's say that seeing an astronaut convincingly requires a resolution of ~5 cm (at that resolution, their hand would be a bit of a blob, but at least you'd be able to tell that it was a person and not a rover...). Let's assume we're using the violet-end of the visible spectrum (wavelength lambda ~ 400 nm). Using the resolution equation:
sin(theta) = 1.22 * lambda/D

where theta is the angular resolution we're after, D is the diameter of the aperture/optical system, and the 1.22 factor can vary a bit between optical schemes but is close enough for our purposes. The distance to the moon is 384,000 km, so theta = arctan(5 cm / 384,000 km) = 7.5E-9 degrees. So:
D = (1.22 * 400 nm)/( sin(7.5E-9 degrees) ) = 3.7 km

So, we would need an optical telescope with an aperture/mirror that is 3.7 km in diameter. Needless to say, this is quite a bit bigger than any telescope that exists today (the largest optical telescopes are about 10 m). If you want to be able to accurately see the astronaut's eyes, to confirm that he's really not a robot, then the telescope would have to be even bigger (more like 40 km in diameter).
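For anyone who wants to check the arithmetic, the whole calculation fits in a few lines (using the same assumed values as above):

```python
import math

# Rayleigh criterion: sin(theta) ~= 1.22 * lambda / D
wavelength = 400e-9        # violet light, in meters
resolution = 0.05          # want to resolve ~5 cm features, in meters
moon_distance = 384_000e3  # Earth-Moon distance, in meters

theta = math.atan(resolution / moon_distance)  # angular size, radians
D = 1.22 * wavelength / math.sin(theta)        # required aperture diameter

print(f"required aperture: {D/1000:.1f} km")   # required aperture: 3.7 km
```

Swap in 5 mm for the resolution and the required aperture grows tenfold, hence the ~40 km figure for resolving eyes.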

Comment Re:Isn't the problem c? (Score 3, Informative) 412

c isn't just the speed of light. It's a constant that appears in all kinds of equations: sometimes as the speed of light, sometimes via the vacuum permittivity and permeability (c = 1/sqrt(eps0*mu0) in Maxwell's equations), sometimes as the ratio between matter and energy (E=mc^2), sometimes as the fundamental ratio between space-like and time-like quantities (relativity, etc.), and so on. It's quite amazing that this same constant comes out with the same value in all these different ways. (And, again, we can measure this constant in totally different experiments and come up with the same value.) This points to a fundamental symmetry in our universe, a realization which gave rise to relativity, quantum physics, and so on.

In short, you shouldn't think of it as merely being the speed that light (or any other particle) travels. It's a fundamental value that is deeply entrenched in just about every branch of physics you can think of. It so happens that it's also the speed that photons travel at. (That's no accident, of course.) Changing the value of c even slightly would propagate through all of our physics equations, and would lead to totally different predictions for a host of results. (More specifically, we would start getting the wrong predictions for many things!)
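As a quick illustration of the "same constant from different directions" point, you can recover the speed of light purely from the electromagnetic vacuum constants, with no photon in sight:

```python
import math

# c drops out of Maxwell's equations via the vacuum constants:
mu0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m (pre-2019 exact value)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
c_maxwell = 1 / math.sqrt(mu0 * eps0)

# ...and the very same number sets the mass-energy exchange rate in E = m c^2:
c = 299_792_458.0           # defined speed of light, m/s
energy_per_kg = 1.0 * c**2  # ~9e16 J locked up in 1 kg of matter

print(f"{c_maxwell:.0f} m/s")  # ~299792458 -- matches the speed of light
```

Electrostatics-lab constants on one side, particle kinematics on the other, one and the same c in the middle.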

So the explanation for this new result must be something rather more subtle than just adjusting c.

Comment Re:Isn't the problem c? (Score 3, Interesting) 412

It's not so simple. We've measured the speed of light to great precision. We know what that speed is, and we know photons are massless, so we know with very high confidence what the speed of massless particles is. If neutrinos travel faster than light, then this is very surprising and points to something new and interesting. I'm avoiding referring to 'c' because it would be ambiguous: in traditional relativity, the constant speed of light is equal to the maximum possible speed, which is also in essence the ratio between space-like and time-like variables in the theory (the slope of light-cones and all that). It's a constant that reappears over and over again, and marvelously it's precisely equal to the speed of light. It can't be as simple as just "we were wrong, c is a bit higher than we thought" because it would immediately mean that "c" isn't as universal as we thought: the symmetry of the universe must be somehow different so that photons and neutrinos (and probably other particles) follow slightly different rules.

But if this result is indeed true, and neutrinos travel faster than light, then this is truly amazing and could mean different things. One possibility is that different particles actually have different 'speed limits' (and different causal cones), so there is c_light, c_neutrinos, etc. There are many other possibilities (extra dimensions, breaking of Lorentz invariance, imaginary mass, closed timelike curves, etc.). All of them amount to a substantial rethinking of some aspect of physics. This is definitely exciting, since it could be telling us something very new! And it won't be as simple as just adjusting a constant a bit. (If we tweak the value of "c" in our equations even just a bit, all kinds of well-tested observations, in everything from cosmology to the functioning of transistors, would come out wrong...)

Lastly, it's worth keeping in mind that it's probably a subtle experimental error (very subtle!). This is still useful, because it will teach us something new about experiment design and possibly even teach us something about particle physics. For instance, the timing calculation is based on certain models of the packet of neutrinos that are generated. But, it could be that the packet that arrives at the end is slightly different than the one sent out at the beginning, thus altering the way one should compute the flight time. This could point to some interesting, previously unknown, ways in which neutrinos are generated, or interact with matter, or interact with each other. In any case it will be interesting.

Comment Re:Loads of cable ties! (Score 4, Interesting) 374

My solution is to use a whole bunch of solutions:
- Instead of cable ties, we mostly use strips of double-sided velcro. It's faster to reconfigure. (Hint: buy "Velcro Plant Ties" instead of cable ties... it's the exact same stuff but much cheaper.)
- Also use cable ties and twist-ties liberally.
- CableDrop (or similar) when you want to hold a cable in position but be able to move/remove it frequently.
- AnthroCart cable management accessories. They are optimized to work with their line of desks, but some of the accessories are just generally useful for grouping cables.
- Medium-length runs of multiple cables can be grouped together using a split tube (e.g. this). Ikea used to sell some dirt-cheap split-tube for cable management, but I can't find it anymore (they do have these, though).
- For some runs, braided sleeving (or even just solid PVC tubing from any hardware store) can be useful. You can unplug all the cables from both ends, and move it as a unit to a new span.

So I guess my advice is to have a mixture of solutions on-hand. For any given task, use the one that feels right!

Comment Re:What's So Expensive? (Score 3, Interesting) 35

So how much more expensive is the second, "smoothing" phase than the original production phase?

It's another wet-chemistry phase. It's no more expensive than the first synthesis step. But each step of course adds to costs (in terms of manpower, chemicals needed, etc.).

Similarly, adding the lipid layer is just a ligand exchange: you mix the quantum dots with the ligand in the right solvent mixture and they become coated. Simple in principle, not too complicated in practice, but it adds another step to the process.

how much do the products of each of those phases currently cost

Quantum dots are fairly expensive, but they are similar in cost to speciality chemicals that don't have industrial uses and thus don't benefit from economies of scale. Some examples from companies that currently sell quantum dots:
- Invitrogen: 4 mL of 1 micromolar QD solution (~15 mg of qdot solids) for $335, i.e. ~$22 million/kg
- Sigma-Aldrich: CdSe QDs, 5 mg/mL, 10 mL solution for $399, i.e. ~$8 million/kg
- SpectrEcology: 50 mg CdSe/ZnS QDs for $449, i.e. ~$9 million/kg

For comparison, ubiquitous chemicals like gasoline are ~$1/kg, common chemicals like acetone (reagent grade) are ~$30/kg, high-purity semi-rare materials (e.g. pure selenium) are ~$1,000/kg, and speciality chemicals (for which there is no industrial need) are typically $100-$1,000 for a 500 mg quantity, which means $1 million / kg. As you can see, it is much more expensive to synthesise a speciality chemical (basically requires a trained chemist to manually do a small-scale lab synthesis for each batch), as compared to industrial-scale manufacturing.
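A quick sanity check on the unit conversions (prices as quoted by the vendors above; quantities approximate):

```python
# Convert each vendor quote to dollars per kilogram.
quotes = [
    ("Invitrogen",    335.0, 15e-6),  # $335 for ~15 mg of solids
    ("Sigma-Aldrich", 399.0, 50e-6),  # $399 for 5 mg/mL x 10 mL = 50 mg
    ("SpectrEcology", 449.0, 50e-6),  # $449 for 50 mg
]
for vendor, price_usd, mass_kg in quotes:
    per_kg = price_usd / mass_kg
    print(f"{vendor}: ~${per_kg/1e6:.0f} million/kg")
```

All three land within a factor of ~3 of each other, squarely in small-batch speciality-chemical territory.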

There's no doubt that quantum dots could be made more cheaply if there were a real need for them. There are huge challenges in terms of how to scale-up the synthesis, but nothing that couldn't be addressed with clever chemical engineering and automation.

Comment Re:What's So Expensive? (Score 4, Informative) 35

That video showed a lot of mixing, boiling, separation. None of it looked very expensive.

It's true that it's all so-called "wet chemistry", which is fairly simple. However, there are many things that make these kinds of syntheses more difficult and complicated (and thus expensive) than others. First of all, you'll notice how careful she had to be to keep the reagents from coming into contact with air. This is because many components of air (especially oxygen and water vapor) will kill the reaction. So you have to prepare reagents in an argon-filled glovebox, transfer reagents carefully into an argon-filled reaction flask, etc. Also note that to get good size uniformity, you need rather pure reagents, and you need to mix the reagents as homogeneously as possible (this is why she injects using two small syringes rather than one large syringe: it makes the addition faster, so all the nanoparticles nucleate and grow at the same time and rate).

Now think about scaling this up to an industrial process. Most chemical plants don't have to worry too much about oxygen or moisture contamination (some of them do, and, of course, they are more expensive to build, operate, and repair). Also the whole 'rapid addition and homogeneous mixing' aspect inherently limits the ability to scale-up, which makes it harder to achieve industrial economies. And of course the ultra-pure reagents are more expensive.

Having said all that, like anything else if there is a pressing need for the material, industrial engineers will find clever ways to produce the material more and more cheaply and efficiently. (Microchips are horrendously complex to manufacture and yet are now remarkably cheap.) So I don't think this is an insurmountable problem... but it is more complicated, and thus expensive, than traditional chemical syntheses. (Actually there are various companies right now that will sell quantum dots of various sizes and kinds. They are mostly intended for use in research, thus are still fairly expensive, but it shows that there is already an industry developing around these materials.)
