Comment Re:Programmed behaviour is programmed behaviour. (Score 5, Interesting) 406

If it can't make its way through a junction where the drivers are following the rules, that's bad programming. If it can't make its way through a junction where other drivers don't come to a complete halt for it, it's not fit to be on the road with other drivers.

The problem is that people don't follow the rules; we follow approximations of the rules. For instance, my driver's handbook described the correct way to yield at a four-way stop as "yield to the person on the right." For a computer, that's an obvious deadlock situation, or worse, an obvious mistake. If four cars are stopped at a four-way stop and each car yields to the car on its right, then (a) a situation can occur where no one goes anywhere, and (b) if the individual cars only pay attention to the car on the right, they could hit an oncoming car turning left, or the car on the left turning left. People process the "yield to the person on the right" rule into something much more complex.

People use a number of complex behaviours at four-way stops. Firstly, the wave of the hand or the nod of the head to indicate that you yield to the other driver is an important signal. Secondly, in my jurisdiction, 90% of four-way stops are handled on a first-come, first-served basis. Lastly, and this is the bit I don't understand, people often yield to the person on the left. The actual system of navigating a four-way stop is much more complex than what an initial computer implementation might be.

Comment Re:Pay more, get more (Score 3, Insightful) 152

The article specifically states that rank in sponsored links correlates to advertising spend, which I would expect.

I would also expect a weaker correlation between PageRank and advertising spend in the normal links. Firstly, a site with significant advertising spend will hopefully generate more hits, and this should translate into PageRank. Secondly, I would expect a site with significant advertising spend to spend more on its site, which hopefully results in a more informative and more useful site. In turn, this should produce a weak correlation between advertising spend and PageRank. Lastly, some correlation probably exists between advertising spend and hits from the Google search spider. This may translate into improved PageRank for trending topics.

In all, I would be surprised if there were not correlations between advertising spend and Google PageRank. What I do like from Google is that they clearly label the sponsored versus non-sponsored links. Google also has a number of non-commercial sites at the top of many search results, which indicates that they treat sites without advertising spend reasonably.

Comment Re:From TFA: bit-exact or not? (Score 3, Interesting) 172

The grandparent poster is talking about compressing videos. If something is known about the data being encoded, then it is trivial to show that you can exceed the performance of arithmetic coding, because arithmetic coding makes no assumptions about the underlying message.

For instance, suppose I was encoding short sequences of positions that are random integer multiples of pi. Expressed as decimal or binary numbers, the message will seem highly random because of the multiplication by an irrational number (pi). However, if I can back out the randomness introduced by pi, then the resulting compression can be huge.

The same applies to video. If it is possible to bring more knowledge of the problem domain to the application, then it is possible to do better on encoding. Especially with real-life video, there are endless cheats to optimize compression. Also, Dropbox may not be limited by real-time encoding. Dropbox might not even need intermediate frames to deal with fast-forward and out-of-order viewing. Dropbox may be solely interested in creating an exact image of the original file. Knowing the application affects compression dramatically.

Lastly, application specific cheats can save real-world companies and individuals money and time. Practical improvements count as advancements too.

Comment Re:Traditional internal facing IT shop .. (Score 2) 198

They could be doing a Citrix-like thing where everyone is logging on to server-housed remote instances. If each instance is one VMware VM, then 3,000 employees and 1,500 VMware instances makes sense.

They could also be a company where each corporate customer, or group of individual customers, requires its own virtual server. For example, a SaaS accounting firm similar to FreshBooks may have a separate virtual server for every corporate customer, or Blizzard, where groups of users get their own dungeon instance.

Otherwise, even for an IT company, they have the IT infrastructure from hell. I can't imagine 1,500 different server applications in a company of 3,000 people.

Comment Re:Use RTGs for ion propulsion then comm. (Score 1, Informative) 77

RTGs are being phased out because (a) the probes need more power than ever with modern computers, and (b) there are environmental concerns. Unfortunately, most of the environmental concerns revolve around the word "Plutonium" and confusion with the much more dangerous Plutonium-239.

The best RTGs use a chemically locked Plutonium-238 oxide that is probably one of the safest fuel sources ever invented. The stuff is a non-reactive ceramic that is almost indestructible, and it is readily rejected by the human body if ingested. It's not even particularly radioactive, as radioactive isotopes go, because to make the RTG last a long time, it is necessary to use an isotope with a sufficiently long half-life. Plutonium-238 oxide is the polar opposite of the more typical dangers of Plutonium-239 that everyone worries about.

See RTG generators and plutonium oxide for more information.

Comment Re:They've only just discovered this? (Score 1) 73

On a failed cast, dynamic_cast returns a null pointer for pointer types and throws an exception for reference types. From cppreference:

If the cast is successful, dynamic_cast returns a value of type new_type. If the cast fails and new_type is a pointer type, it returns a null pointer of that type. If the cast fails and new_type is a reference type, it throws an exception that matches a handler of type std::bad_cast.

Comment Self-Checking Code (Score 3, Insightful) 285

I gave up on the concept that I would be able to write and debug programs correctly the first time. Now all the central data structures in any long-lived control system get error-checking code added to them. For example, the sorted-list code is built with a checker to ensure it stays in order. The communications code gets error-checking. The PID controllers get min/max testing, etc.

Every once in a while I come across bugs that are not in the source code. Often they are compiler errors. Sometimes the bugs involve a rare C/C++ or operating-system eccentricity. Sometimes the errors are caused by obscure library changes. Sometimes they are hardware errors.

Especially with embedded micro-controllers, I leave the consistency-checking code in, because you just can't assume that everything always works. The nature of software bugs changes with time, and not always in the way a programmer would expect. I am frequently surprised by how obscure some of the bugs are.

Comment Re:Core considerations (Score 1) 150

Per-core or per-CPU software pricing can dominate the cost calculation. We have a CFD application, and we were considering boosting the hardware. One look at the software costs discouraged us.

A costly complication is that 3-D CFD (or FEA) is an O(n^3) problem in mesh density. Doubling the mesh density means 2^3 = 8 times the CPU time. Increasing the mesh density 10 times requires 10^3 = 1000 times the CPU time. If you are pushing the extremes, small changes in mesh density have significant cost impacts.

It makes me wonder how many research groups are paying full-cost for the commercial CFD packages. Many universities, some quasi-government labs, and many small startups will not have the money for the full-price commercial packages.

Comment Re: How much you got? (Score 1) 184

MariaDB. Google switched from MySQL to Oracle to MySQL to Google F1 for its AdWords technology. See the wiki page on AdWords. Since then, many companies, including Google, have switched their smaller MySQL databases to MariaDB.

There is an interesting account of the Google Oracle migration at the wayback machine.

Comment Get a trademark (Score 1) 108

Get a trademark on the domain name ending in .com. No one else will trademark a domain name they don't own. When someone comes around to sue you, sue them back for trademark infringement. I think there have been a few cases where Walmart tried to eliminate pre-existing trademark owners; as far as I know, Walmart lost or the cases ended in settlements.

Comment Re:Kids don't understand sparse arrays (Score 3, Informative) 128

Sparse arrays are a mathematical abstraction that completely ignores implementation details. Formally, a sparse array is any matrix that has "many" zero (or null) values. The practical problem is that most useful optimizations around sparse arrays require closely matching the implementation details to the problem being solved. With sparse arrays, implementation details are killers.

For instance, suppose the standard solution is adopted. The sparse array is organized as a linked list of rows, with each row containing another linked list holding the individual data values. What happens if you want to do a matrix multiply? A matrix multiply requires a column-by-row lookup and a row-by-column lookup. One will be an O(1) lookup, and the other will be an O(n^2) lookup. This makes a full matrix multiply an O(n^5) operation, and memory is the least of everyone's worries.

To optimize the code, it is necessary to look closely at how the matrix will be built and used. However, as soon as that starts happening, the matrix multiply decomposes into a bunch of specialized matrix operations. At this point, the abstraction starts falling apart.

For example:
a) Assume the multiplication involves a diagonal matrix. Then the optimum solution is to store the diagonal matrix as a 1xn matrix, and specialize the matrix code. This was the favoured approach from numerical methods in C and Fortran.
b) Assume the multiplication involves a tridiagonal matrix. Then the optimum solution is to store the tridiagonal matrix as a 3xn matrix, and specialize the matrix code. Again, see numerical methods in C and Fortran, or just about any good matrix library.
c) Assume the matrix operation involves a "control-systems" style matrix. One populated row, followed by a diagonal series of rows with one or two elements. The optimum solution is to develop specialized code. For most control systems problems, this matrix never changes.
d) Control systems often have a compact matrix representation involving a series of matrix multiplies. However, if the matrix multiplies are analysed, they become a much simpler sequence of equations that can often be executed in O(n^2) time instead of the O(n^3) time of the matrix multiplies. As such, develop specialized code. Both MATLAB and Mathematica have functions where numerical operations can be broken down into their constituent formulas and saved as "C" code.
e) Assume we really need to frequently multiply a truly sparse array. Then build two sets of linked lists, one organized by row/column and another organized by column/row. Then both the row and column lookups can be done as O(1) operations. The matrix multiply is an O(n^3) operation.
f) Just because the inputs to a matrix operation are sparse, doesn't mean the output array is sparse. I'm thinking of Singular Value Decomposition, some matrix multiplies, matrix inverses, matrix pseudo-inverses, and covariance matrices. Also, some matrices that appear in Quantum physics. In this case, matrix operations need to be further specialized to deal with creating non-sparse matrices from sparse-matrices. Additionally, some matrices may need to be rounded to sparse, even though they may be fully populated, like some covariance matrices.

In the end, sparse matrices are simply a descriptive term for a bunch of application-specific optimizations. Sparse matrices devolve into numerical optimizations that no one cares about unless they are looking at an application that requires the specific numerical optimization. I'm not surprised high-school CS coders don't "understand" them.

Comment Re:Social Media Outage (Score 3, Interesting) 371

Unfortunately, that doesn't stop people. All they need to do is create a fake Facebook profile. The scam is:

1. Acquire the target's name and some basic information.
2. Create Facebook profile.
3. Post some cat pictures, get friends.
4. Run a scam / post a defamatory message
5. ***
6. Profit / Watch target get fired

Non-participation in social networks is no protection.
