User Journal

Journal Journal: What constitutes a good hash anyway? 3

In light of the NIST complaint that there are so many applicants for their cryptographic hash challenge that a good evaluation cannot be given, I am curious as to whether they adequately defined the challenge in the first place. If the criteria are too loose, then of course they will get entries that are unsuitable. However, the number of hashes entered does not seem to be significantly higher than the number of entries in the encryption mode challenge. If this one is impossible for them to evaluate well, then maybe that one was too, in which case perhaps we should take their recommendations on encryption modes with a pinch of salt. If, however, they are confident in the security and performance of their encryption mode selections, what is their real objection in the case of the hashing challenge?

But another question one must ask is why there are so many applicants for this, when NESSIE (the European version of this challenge) managed just one? Has the mathematics become suddenly easier? Was this challenge better-promoted? (In which case, why did Slashdot only mention it on the day it closed?) Were the Europeans' criteria that much tougher to meet? If so, why did NIST loosen the requirements so much that they were overwhelmed?

These questions, and others, look doomed to go unanswered in any serious way. However, we can take a stab at the criteria and evaluation problem. A strong cryptographic hash must have certain mathematical properties. For example, the distance between any two distinct inputs must be unconnected to the distance between the corresponding outputs. Otherwise, knowing the output for a known input and the output for an unknown input will tell you something about the unknown input, which you don't want. If you take a large enough number of input pairs and plot the distance between the inputs against the distance between the corresponding outputs, you should get a completely random scatter-plot. Also, if you take a large enough number of inputs at fixed intervals, the distances between the corresponding outputs should follow a uniform distribution. Since you can't reasonably test 2^512 inputs, you can only apply statistical tests to a reasonable subset and see whether the probability that you have the expected patterns is within your desired limits. These two tests can be done automatically. Any hash that exhibits a skew that could expose information can then be rejected equally automatically.
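For the curious, here's roughly what that kind of automated screening might look like - a minimal sketch in Python, with SHA-256 standing in for a candidate hash and sample sizes chosen purely for illustration, not any official test suite:

```python
# A minimal sketch of the two automatic screening tests described above.
# SHA-256 stands in for a candidate hash, and the sample sizes and statistics
# are purely illustrative - this is not any official test suite.
import hashlib
import os


def hamming(a: bytes, b: bytes) -> int:
    """Bit-level Hamming distance between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))


def candidate_hash(data: bytes) -> bytes:
    """The hash under test (SHA-256 here, purely as a placeholder)."""
    return hashlib.sha256(data).digest()


def correlation_test(samples: int = 2000, msg_len: int = 64) -> float:
    """Pearson correlation between input distance and output distance over
    random input pairs. For a hash with no obvious skew this should be
    statistically indistinguishable from zero."""
    xs, ys = [], []
    for _ in range(samples):
        a, b = os.urandom(msg_len), os.urandom(msg_len)
        xs.append(hamming(a, b))
        ys.append(hamming(candidate_hash(a), candidate_hash(b)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def fixed_interval_test(samples: int = 4096) -> float:
    """Hash inputs taken at a fixed interval and return the mean distance
    between consecutive outputs, normalised by the digest width. Anything
    far from 0.5 suggests the output bits are not behaving independently."""
    digest_bits = len(candidate_hash(b"x")) * 8
    prev = candidate_hash((0).to_bytes(8, "big"))
    total = 0
    for i in range(1, samples):
        cur = candidate_hash(i.to_bytes(8, "big"))
        total += hamming(prev, cur)
        prev = cur
    return total / (samples - 1) / digest_bits


if __name__ == "__main__":
    print("input/output distance correlation:", correlation_test())
    print("mean normalised output distance:  ", fixed_interval_test())
```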

This is a trivial example. There will be other tests that can also be applied automatically to weed out the more obviously flawed hashing algorithms. But this raises an important question. If you can filter out the more problematic entries automatically, why does NIST have a problem with the number of entries per se? They might legitimately have a problem with the number of GOOD entries, but even then all they need to do is have multiple levels of acceptance and an additional round or two. eg: At the end of human analysis round 2, NIST might qualify all hashes that are successful at that level as "sensitive-grade" with respect to FIPS compliance, so that people can actually start using them, then have a round 3 which produces a pool of 3-4 hashes that are "classified-grade", and a final round to produce the "definitive SHA-3". Adding more rounds takes longer, but by producing lower-grade certifications along the way, the extra time needed for a thorough cryptanalysis won't impede those who actually use such functions.

(Yes, it means vendors will need to support more functions. Cry me a river. At the current scale of ICs, you can put one hell of a lot of hash functions onto one chip, and have one hell of a lot of instances of each. Software implementations are just as flexible, with many libraries supporting a huge range. Yes, validating will be more expensive, but it won't take any longer if the implementations are orthogonal, as they won't interact. If you can prove that, then one function or a hundred will take about the same time to validate to accepted standards. If the implementations are correctly designed and documented, then proving the design against the theory and then the implementation against the design should be relatively cheap. It's crappy programming styles that make validation expensive, and if you make crappy programming too expensive for commercial vendors, I can't see there being any problems for anyone other than cheap-minded PHBs - and they deserve to have problems.)

User Journal

Journal Journal: Nintendo Announces DSi and Wii storage solution

Earlier this morning, Nintendo made several major announcements in a press conference in Japan, ranging from a new Nintendo DS to a Wii storage solution. Nintendo's first announcement was a brand-new handheld in the Nintendo DS line of consoles. This revision of the DS brand will be a significant break from the previous DS Lite console, and will be named the "Nintendo DSi". (Nintendo DS-Eye, get it?) Nintendo also announced a solution to the Wii storage problem. Unfortunately, it sounds as though players will be able to download games to their SD Card, but not actually play them directly from the card.

Firehose Link: http://slashdot.org/firehose.pl?op=view&id=1225579

(Still trying to figure out if the firehose does anything.)

User Journal

Journal Journal: Beowulf MMORPGs 3

Found this interesting site, which is focussing on developing grid computing systems for gaming. The software they seem to be using is a mix of closed and open source.

This could be an important break for Linux, as most of the open source software being written is Linux-compatible, and gaming has been Linux's biggest problem area. The ability to play very high-end games - MMORPGs, distributed simulators, wide-area FPSes and so on - could transform Linux in the gaming market from being seen as a throwback to the 1980s (as unfair as that is) to being considered world-class.

(Windows machines don't play nearly so nicely with grid computing, so it follows that it will take longer for Microsoft and Microsoft-allied vendors to catch up to the potential. That is time Linux enthusiasts can use to get a head-start and to set the pace.)

The question that interests me is - will they? Will Linux coders use this opportunity of big University research teams and big vendor interest to leapfrog the existing markets completely and go straight for the market after? Or will this be seen as not worth the time, the same way that a lot of potentially exciting projects have petered out (eg: Open Library, Berlin/Fresco, KGI, OpenMOSIX)?

User Journal

Journal Journal: The Lost Tapes of Delia Derbyshire

Two hundred and sixty seven tapes of previously unheard electronic music by Delia Derbyshire have been found and are being cataloged.

For those unfamiliar with Delia Derbyshire, she was one of the top pioneers of electronic music in the 1950s and 1960s. One of her best-known pieces was the original theme tune to Doctor Who. According to Wikipedia, "much of the Doctor Who theme was constructed by recording the individual notes from electronic sources one by one onto magnetic tape, cutting the tape with a razor blade to get individual notes on little pieces of tape a few centimetres long and sticking all the pieces of tape back together one by one to make up the tune".

Included in the finds was a piece of dance music recorded in the mid-60s which, when examined by contemporary artists, was judged good enough to pass as mainstream dance music today. Another piece was incidental music for a production of Hamlet.

The majority of her music mixed wholly electronic sounds, produced by a sophisticated set of tone generators and modulators, with electronically altered natural sounds, such as those that could be made from gourds, lampshades and voices.

User Journal

Journal Journal: Well, this is irritating. 3

Someone has trawled through YouTube and flagged not only the episodes of The Tripods, but also all fan productions, fan cine footage and fan photography of the series. How so? Can't you buy it on DVD? Only the first season; the second exists only in pirated form at scifi conventions, and of course the fan material doesn't exist anywhere else at all. The third season, of course, was never made, as the BBC had a frothing xenophobic hatred of science fiction at the time. (So why they made a dalek their Director-General at about that time, I will never know...)

What makes this exceptionally annoying is that the vast bulk of British scifi has been destroyed by the companies that produced it, the vast bulk of the remainder has never seen the light of day since broadcast, and the vast bulk of what has been released has been either tampered with or damaged in some other way, often (it turns out later) very deliberately, sometimes (again it turns out later) for the purpose of distressing the potential audience.

I've nothing against companies enforcing their rights, but when those companies are acting in a cruel and vindictive fashion towards the audience (such as John Nathan Turner's FUD of audiences being too stupid to know what they like, or too braindead to remember what they have liked), and the audiences vote with their feet, on what possible grounds can it be considered justified for those companies to (a) chain the audience to the ground, and (b) then use the immobility of the audience to rationalize and excuse the abuse by claiming the audience isn't going anywhere?

I put it to the Slashdot Court of Human/Cyborg Rights that scifi fans are entitled to a better, saner, civilized explanation, and that whilst two wrongs can never make a right, one wrong is never better.

User Journal

Journal Journal: 1nm transistors on graphene

Well, it now appears the University of Manchester in England has built 1nm transistors on graphene. The article is short on details, but it appears to be a ring of carbon atoms surrounding a quantum dot, where the quantum dot is not used for quantum computing or quantum states but rather for regulating the electrical properties. This is still a long way from building a practical IC using graphene. It is, however, a critical step forward. The article mentions other bizarre behaviours of graphene but does not go into much detail. This is the smallest transistor produced to date.
PC Games (Games)

Journal Journal: Scientific and Academic Open Source - Hotspots, Black Holes

One of the most fascinating things I've observed in searching for Open Source projects available for whatever I'm doing at the time is the huge disparity in what is available, how it is used and who is interested.

An obvious place to start is in the field of electronics. Computer-based tools are already used to build such stuff, so it's a natural replacement, right? Well, almost. There are tools for handling VHDL, Verilog and SystemC. There are frameworks for simulating both clock-based and asynchronous circuits. You can do SPICE simulations, draw circuit diagrams, download existing circuits as starting points or sources of inspiration, simulate waveforms, determine coverage and design PCBs. OpenCores provides a lot of fascinating ready-made designs, and Sun provides the staggering T1 and T2 UltraSPARC cores and the Sirocco 64-bit SPARC. This field probably hasn't got everything it needs, but it has a lot.

Maths is another obvious area. Plenty of Open Source tools for graphing, higher-order logic, theorem provers, linear algebra, eigenvalues, eigenvectors, signal processing, multiple-precision arithmetic, numerical methods, solvers for all kinds of other specific problem types, etc.

What about astronomy? That requires massive crunching of tabular data, correlation of variations, and moving telescopes around with absolute precision - things computers tend to be very good at. There are a few tools. Programs for capturing images are probably the most common, although some telescopes come with software for controlling them, obtaining data and performing basic operations. Mind you, how much more than this does one need in software? Some things are better done in hardware (for now, at least) because the software hasn't the speed. Yes, the control software seems a little specialized, but it'd be hard to make something like that general-purpose.

Chemistry. Hmmm. Lots of trivial stuff, more educational than valuable - periodic tables, 3D models of molecules, LaTeX formatting aids. There's a fair amount on the study of crystals and crystallography, which is as much chemistry as it is physics, but there's not a lot else. Chemistry involves a lot of tables (which would be ideal for a standardized database), a lot of mathematical equations, formulae, graphing, measuring and correlating all sorts of data, the consequences of different filtering and separation techniques, the wavelength and intensity of energies, analysis of the results of atomic mass spectrometry or other noisy data, etc. I see the underlying tools for doing some (but not all) of these things, but I don't see the heavy lifting.

Archaeology has very few non-trivial tools. There is some signal processing for ground-penetrating RADAR, but there are virtually no tools out there that could help with interpretation. In fact, most RADAR programs don't interpret either; they just display the result on a small LCD screen. Nor do any tools exist for correlating interpretations (other than manually, via a GIS database that is extremely naive for this purpose). There are a few scraps here and there, but signal analysis and GIS seem to be about it, and those were mostly developed for mining companies and tend to show it.

Biology has plenty of DNA sequencing code. By now, Slashdotters should be able to sequence their own DNA, not pay someone a thousand dollars to do it. You mean those aren't enough, that you need more hardware? And a lot more software? It's an important step, but it's not unique.

Mechanical Engineering. I haven't seen anything of any significance.

Geology. Not really, beyond the same software as for archaeology, but used to find seams in rock.

Psychology: Nada.

Psychiatry: None.

Sports: Lots of software getting used, but little of it is open source.

Result: those with the least to lose and the most to gain make the change. Those who feel there's no benefit in changing what they're doing will continue doing what they're doing. My suggestion? There are gaping holes in Open Source. Fill them in.

User Journal

Journal Journal: Open Source Archaeology

This is an interesting (to me) piece of work that I've been asked to do: use open-source software to analyze data from both ground-penetrating radar and magnetometers, use open-source GIS software for tracking archaeological finds, use open-source modeling software to produce archaeologically and technically sound reconstructions, and then use a mix of open-source virtual reality software and open-source web technology to provide both the raw and the visually interpreted data in a form that is of practical use to experts and non-experts alike.

If that sounds like a complex task, it is. The site is extremely convoluted, there is a wealth of data that is currently in a highly unusable form, and what is meaningful to an expert is not necessarily the least bit useful or usable to a non-expert (and vice versa). Currently, there is a lot of skepticism by The Powers That Be that such a project would even be possible. My first task, then, is to produce an example. My impossible mission is to convert the few scraps of information published on medieval aisled halls, along with the very limited archaeological finds from the site in question, into the dual format of raw information and virtual reality.

On the one hand, the limited information means that the first part is relatively easy. An online archaeological GIS-enabled database may not be trivial, but all the software needed can - at least - be found on Freshmeat and the amount of data entry is relatively small. The second part is tougher. Again, open-source VR software does exist, but it is one thing to enter known values that can be verified into a database, it is entirely another to derive values that are implied and logically required but for which there is no direct evidence at all.
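To give a feel for the first part, here's a minimal sketch of a finds database with crude spatial querying, using nothing but Python's standard library. The table layout and the example find are invented for illustration, and a real deployment would sit on a proper spatial database such as PostGIS rather than a bounding-box query:

```python
# A minimal sketch of the finds database idea, using only Python's standard
# library. The table layout and the example find are made up for illustration;
# a real deployment would use a proper spatial database (PostGIS, say) with
# real spatial indexing rather than a bounding-box SELECT.
import sqlite3

conn = sqlite3.connect("finds.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS finds (
        id          INTEGER PRIMARY KEY,
        description TEXT NOT NULL,
        period      TEXT,          -- e.g. 'medieval'
        easting     REAL NOT NULL, -- site grid coordinates, metres
        northing    REAL NOT NULL,
        depth_m     REAL           -- depth below site datum
    )
""")

# Record a (hypothetical) find.
conn.execute(
    "INSERT INTO finds (description, period, easting, northing, depth_m) "
    "VALUES (?, ?, ?, ?, ?)",
    ("post hole, possible aisle post", "medieval", 102.4, 87.1, 0.6),
)
conn.commit()

# Crude spatial query: everything inside a rectangular trench.
for row in conn.execute(
    "SELECT id, description, easting, northing FROM finds "
    "WHERE easting BETWEEN ? AND ? AND northing BETWEEN ? AND ?",
    (100.0, 110.0, 80.0, 90.0),
):
    print(row)
```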

There is a catch. Virtual reality is great for producing models you can walk through, but it's generally pretty lousy at telling you if said model violates the laws of physics. Given that I can hardly build my own medieval aisled hall, I know of no other method besides hand-cranking through the numbers for validating the predicted structure. Suggestions would be extremely welcome, as would any idea on how I could either use the open-source approach for the hall design, or how I could use something like BOINC to automate the validation of a virtual landscape.

Technically, this is fun - I'm getting to do some reasonably original work - but original work is necessarily far more demanding in terms of research and application than run-of-the-mill work. Mind you, I only have myself to blame - the archaeologists have been satisfied so far with producing a web-based diary of major finds, plus entering the data on a completely unusable regional database. Such are the hazards of pointing out that you can do better! :)

User Journal

Journal Journal: Access Forbidden

Who said you could read this anyway? What's your problem, Jack?

User Journal

Journal Journal: How I Learned Philosophy

From the "Paying People to Argue With You" thread.

http://slashdot.org/article.pl?sid=07/11/05/1353215

How I Learned Philosophy (Score:5, Insightful)
by severoon (536737)
http://slashdot.org/comments.pl?sid=350509&cid=21247745

Actually, it's not the attempt to mathify that I find problematic--I find that encouraging. It is, though, the results.

My (awesome) university philosophy professor had us do a very interesting exercise that was, though more logical than mathematical in nature, similar to what the author of TFA was going for. It goes like this...

Write down a belief that you have. For people new to this process (the entire class), this should be a strongly held belief...doesn't matter how controversial. Let's say, for example: I think abortion should be a woman's choice. (For you controversy-hounds out there, please don't mistake this for my actual belief--I'm intentionally not going to define my actual belief on this topic here.) Don't worry about getting the wording just right--you're free to revisit your initial statement as many times as you like throughout and revise it to more concisely represent your intent.

Now write down the set of "sub-beliefs" that you have which form the basis of your belief. For our example: 1. Life begins at conception. 2. Every life is equally valuable. 3. A life has no quantifiable value, but is inherently precious and ought to be protected if at all possible. Etc. Next we iterate, applying the same process to each belief listed. Obviously, you will very quickly diverge into an explosion of statements that resist corralling at every effort. Do not fret--I haven't told you about the thrust of the exercise yet.

(I should mention here that we did an entire section on identifying context-free statements, and we were asked to make our best effort to ensure that each statement was context-free, or as free of context as possible. "Context-free" means that the statement is true of our beliefs regardless of the circumstances in which the statement is tested. If that's not possible--and it's not often possible--we'd go for "generally" true, where "common sense"--whatever that is--dictates obvious exceptions.)

You will find it unnecessary to list each and every belief supporting your initial statement, which would quite likely fill several thick volumes if you did so exhaustively. Luckily, you don't have to do this to satisfy the point of the exercise, which is: where necessary, skip down to "lowest level" beliefs...that is, at some point you will mentally reach a point where you have identified a belief for which you have no further basis beliefs. When you reach this point, you have identified an axiomatic belief--that is, something you accept essentially on faith, on gut feeling, because you think it is correct. If possible, identify the key beliefs that go from your initial statement to the set of axiomatic beliefs identified.

The next step is to look at your beliefs, both axiomatic and intermediate, for consistency. In every case in carrying out this exercise, one will invariably find a whole host of contradictory statements. Then we did an iteration that attempts to resolve these conflicts by tweaking our initial statement, etc...provided we were tuning up the language to indicate real intent and not moving the statements further away from our actual beliefs, great. The ultimate idea is to identify our beliefs in all their gory, inconsistent, warty detail.

Then we each make up a list of our so-called axiomatic beliefs, and they are given to 5 random classmates (all double-blind, of course). You are then tasked with taking home those 5 lists of axiomatic beliefs and attempting to drill down further. If they are truly axiomatic, you won't be able to do this. The idea here is that you ultimately get back 5 people's analysis of your list and are given another chance to continue the process--most of the time, it turns out you realize your axiomatic beliefs weren't axiomatic for you after all, and that you can actually drill down even more.

Anyway, it goes on like this, the ultimate point being that you arrive at some network of beliefs which you apparently do accept as axiomatic. The focus here is not on the logic that leads you down the path from the initial statement to the final list...the point is to show that your beliefs are not rigorously logical, even after you've done your level best to identify all the logical flaws, ultimately you wind up with a list of axiomatic beliefs that either directly or indirectly contradict each other to some extent. What these beliefs are, where the conflicts are, and how you resolve these conflicts all roughly correspond to your worldview.

As an entertaining add-on at the end of the course, the prof provided us with some very mild statistical metrics that told us how self-contradictory our beliefs were when pegged against our classmates, previous years, different types of statements (it was generally true that the more strongly held / the more controversial the statement, such as the abortion example above, the less self-consistent the foundational beliefs identified).

For a couple of years after doing this exercise, I found it very difficult to make strong statements of opinion about controversial topics. My mind would involuntarily start this process, identifying all the biggest logical hurdles and inconsistencies built into the statements I was about to make. This reflex also made me annoying to others with strongly held beliefs. :-)

User Journal

Journal Journal: Word from an Oregon Senator on software radio 3

I received a letter in response to a request I made to Senator Ron Wyden (Oregon) on the topic of software radios. I pointed out that Open Source is often more secure than closed source, that a ban on open source would be a priori restraint of trade that would probably be detrimental to the deployment and usefulness of such devices, and that the FCC's position on the matter did not appear to be justified by the facts. I tried to avoid the whole freedom argument, on the grounds that politicians are generally not elected by intellectuals. Over-priced, crippled technology that would probably be made elsewhere... that's an argument politicians can hear better.

(No insult intended to Senator Wyden, he may very well be extremely smart, but since I don't know him, the most logical thing for me to do is to insinuate all the areas that could dent his popularity and fund-raising potential.)

His response is interesting. Firstly, he agreed that Open Source can be more secure. A fair enough position to take, given the level of closed-source IT industry in Oregon, and far more generous than I'd have expected for that same reason.

His second comment - that many in the software industry have made identical, or near-identical, objections - was fascinating. Politicians are extremely adept at saying what you want to hear - they have to be, it's their only way to survive in their line of work - but to the extent that IT industry leaders have complained, the Senate is apparently taking notice. They would appear to be aware now of Open Source - for good or bad - and are adjusting their thinking accordingly.

He goes on to say that he is not satisfied that the FCC is correct in claiming that closed source will make the software more secure, and that banning open source may be counter-productive to the FCC's objectives. Again, that's good. Whether he believes it or not, I don't know, but there's clearly enough doubt in his mind as to the wisdom of the FCC's course that he's willing to put in writing that he believes Open Source could make for a more secure product and that the FCC's actions could backfire.

The last part is the part that unnerves me slightly. He says that if legislation comes before the Senate, he will keep my views in mind. He did NOT say he would oppose legislation that would ban Open Source software radios, only that he would keep in mind that I - and others - oppose such a ban. Nor did he say that he would make any effort to bring forward any legislation requiring the FCC to re-examine the issue or explain themselves.

Why is that unnerving? Because although he expresses disquiet, he won't commit himself to any actual action over it. Maybe I'm being too hard on him, but it bothers me intensely that he acknowledges my concerns are widespread in the industry but promises nothing. Not even so much as to ask the FCC why they're being so shirty on the issue. The letter is good, I appreciate his taking the time to, well, ask his secretary to probably print out a standard form letter, but that's not going to achieve results. Why should the FCC care how many form letters have been printed? Well, unless they have shares in the company making the envelopes.

A response that shows some sympathy is better than no response at all, but only if it is accompanied by action. I hope it will be. I hope my mail to him made some useful contribution to the debate. I also hope that someday I'll win the lottery. I am curious as to which has the greater odds of success.

User Journal

Journal Journal: Oh goody. Exactly what I didn't need. 6

A person I designed an online store for didn't want to pay for it. That happens. They also turned out to be a gun and knife fanatic. No big deal, right? That depends on how you interpret the phrase "you'd better watch your back, if I were you". May this be a lesson to you all - never do software consulting work for a latent psychotic.
User Journal

Journal Journal: Who uses Freshmeat?

One thing that has often puzzled me, when working in places that use Open Source software, is how many people know of Slashdot (I'd say 75% or more read it daily) but how few were even aware Freshmeat existed. The same was true of an announcement service that tracked Open Source and shareware products. Yet those projects I track on Freshmeat (I own something like 150 records and am subscribed to something like twice that) show hundreds - sometimes thousands - of accesses after a new release. If the corporate sector is totally blind to Freshmeat, who is doing the accessing?

Looking at the numbers, I think I can hazard some guesses. Educational and government sites probably rank high in the user charts, as clustering and scientific software are often moderately or highly subscribed and show moderate to high activity after an update. The stats are also skewed towards servers and other administrative or maintenance software, so I'm guessing it's used more by admins than by users. That is somewhat foolish, since users should be the ones driving updates: they're the ones who know what functions they need and what bugs they experience.

The popularity of MPlayer is an odd one, as most users will get this from their distro and it's unlikely to be used for system maintenance. Nonetheless, it is more popular than any other package, including the Linux kernel. Even the Linux kernel is oddly placed, at second, as this is announced in so many different places, from LWN and Slashdot to the Linux Kernel Mailing List and LinuxHQ. Most software is only ever announced on its homepage and on Freshmeat only if someone has made a record for it and is keeping it up-to-date. The dilution of the Linux kernel announcements is so staggering that it is amazing that a single service would get so much attention.

I guess if we assume a heavy Government/Educational userbase, it's more understandable. Those are going to be places where heavy-duty mailing lists are not going to be an option, and where surfing websites on the off-chance of an announcement would be frowned upon.

If I'm correct, how do we interpret the numbers? The usage won't be a random sample of a complete cross-section of the population, it'll be a self-selecting group with relatively narrow interests that is largely built up from a relatively small segment of the Open Source userbase.

Well, why should we interpret the numbers? That's an easy one. Corporations resist software they consider "unpopular" or "unused", no matter how useful or productive it would be. They are staggeringly blind to reality. If you can produce meaningful usage estimates, and can defend them, it sometimes (not always) weakens resistance to vitally-needed updates and changes. If you can show that some project has been downloaded by tens of thousands of probable competitors, you can be damn sure that project will be on the server by the next morning, come hell or high water.

Some would argue that it doesn't matter - we get paid to do what we're told to do and to make the managers look good. That entire discussion could - and does - fill vast volumes, with no real answer. I've got my own thoughts, but that's not really this discussion and I'd probably run Slashdot's servers out of disk space if I were to put them all down here.

Here, I am far more interested in knowing why the userbase for any announcement service should be self-limiting. I've seen places be utterly ignorant of what software exists or where it can be found. I've had people ask me how to search for programs or how I know about updates before the distros push the packages out. On the flip-side, as I've already pointed out, there are packages whose records show far greater levels of access than you would expect, given the availability of the same (or better) information elsewhere, sometimes much sooner.

Based on what I've seen, I am going to say that the records for "mission-critical" software and software of specific interest to one of the niches inhabiting Freshmeat will be relatively close to the actual levels of active interest. Passive interest (eg: users of a desktop Linux system are probably not actively interested in new kernel or glibc releases, but still use those updates) is probably a lot higher, but I don't think it's easily calculable. I'm going to guess that the number of people who actually download the source code is somewhere between two and five times the number who visit the site via Freshmeat.

For commercial and industrial software, I'm going to guess that Freshmeat numbers are way too low, that people discover packages by accident or media rumor, or outsource the updates to some group that use a commercial tracking/monitoring service. For this type of software, I'm guessing that the actual number of people impacted by announcements might be anywhere from five to fifty times the number given in the stats. There is no simple way of finding out who knows what, though, because there is nowhere to look.

However, when giving a presentation to managers on why product A is the one to go for, you can't be vague, you can't be hesitant and you absolutely can't be technical. That's why having a bit more certainty would be a good thing. Lacking any means of being certain, though, anyone in that position has to give some number that managers can use. I would take the URL access value from Freshmeat (the number who actually visited the site, not just the record) and scale it by the midpoint of the numbers I've suggested. It's not perfect, but it's almost certainly the best number you are going to be able to get as things stand.
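Written out, the arithmetic is as trivial as it sounds - a sketch, using the multiplier ranges I've guessed at above (guesses, remember, not measurements):

```python
# The back-of-the-envelope scaling described above, written out. The hit
# count and the multiplier ranges are guesses from this entry, not data.
def estimate_users(url_hits: int, low: float, high: float) -> float:
    """Scale a Freshmeat URL-access count by the midpoint of a guessed range."""
    return url_hits * (low + high) / 2


# Mission-critical / niche software: guessed 2x to 5x the recorded visits.
print(estimate_users(1200, 2, 5))   # -> 4200.0

# Commercial / industrial software: guessed 5x to 50x.
print(estimate_users(1200, 5, 50))  # -> 33000.0
```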

Yeah, yeah, GIGO. But managers don't generally care about GIGO. They care that they have plausible and defendable numbers to work with. That is what they are getting. If you wait to give them something precise and accurate, you'll certainly be waiting until long after any decision has been made, and probably be waiting forever in many cases.

What if you're a home user? Plenty of those exist. Well, to home users, I would argue that updates from distros are typically slow in coming, that library version clashes are far too frequent, that permutations of configuration that may be interesting or useful usually won't be provided, and that even distros that build locally (Gentoo, for example) have massive problems with keeping current and avoiding unnecessary collisions.

If you're not specifically the sort of user served by the distro of your choice, you WILL find yourself building your own binaries, and you would be very strongly advised to be aware of all updates to those packages when they happen.

User Journal

Journal Journal: Meta: Is the new threading system messed up? 2

So it seems like the new discussion-threading system (aka "D2", according to the preferences page) no longer works for me.

I had just gotten used to it -- in particular, being able to click on comments to expand or collapse them -- and suddenly at some point this afternoon I reloaded a page and the whole thing just went away. I'm back to the regular discussion style, where clicking on the title of another user's post will open that post in a separate page.

However, I still have the new style selected in my preferences. I'm just curious whether this is a global problem or something specific to my network or configuration. I've tried disabling AdBlock and some other relevant FF extensions but no dice.

Anyone else noticing anything amiss?
