Education

Journal Journal: HOWTO: Run an educational system 1

The topic on Woz inspired me to post something about the ideas I've been percolating for some time. These are based on personal teaching experience, teaching experience from my siblings and father at university level, and from my grandfather at secondary school, sixth-form college and military academy. (There have been a lot of academics in the family.)

Anyway, I'll break this down into sections. Section 1 deals with the issues of class size and differences in ability. It is simply not possible to teach a group of kids of wildly differing ability to any kind of meaningful standard. Each subject should be streamed, such that people of similar ability are grouped together -- with one and only one exception: you cannot neglect the social aspect of education. Some people function well together, some people dysfunction well together. You really want to maintain the former of those two groups as much as possible, even if that means moving a person up or down one stream.

Further, not everyone who learns at the same pace learns in the same way. Streams should be segmented according to student perspective, at least to some degree, to maximize each student's ability to fully process what they are learning. A different perspective will almost certainly result in a different stream. Obviously, you want students taught from the perspective that places them in the fastest stream they can manage.

There should be sufficient divisions that any given stream progresses with the least turbulence possible. Laminar flow is good. There should also be no fewer than one instructor per ten students at secondary school level. You probably want more instructors in primary education and fewer at college/university, with 1:10 being the average across all three.

Section 2: What to teach. I argue that the absolutely fundamental skills are how to learn, how to research, how to find data, how to question, how to evaluate, and how to apply reasoning tools such as deduction, inference and lateral thinking in constructive and useful ways. Without these skills, education is just a bunch of disconnected facts and figures. These skills do not have to be taught directly from day 1, but they do have to be a part of how things are taught and must become second nature before secondary education starts.

Since neurologists now believe that what is learned alters the wiring, the flexibility and the adult size of the brain, it makes sense for the material taught to optimize things a bit. Languages seem to boost mental capacity and the brain's capacity to be fault-tolerant, so teaching multiple languages from different language families would seem to be a Good Thing in terms of architecting a good brain. Memorization/rote learning seems to boost other parts of the brain. It's not clear what balance should be struck, or what other brain-enhancing skills there might be, but some start is better than no start at all.

Section 3: How to test. If it's essential to have exams (which I doubt), the exam should be longer than anyone -- however good -- could complete within the allowed time, with a gradual increase in the difficulty of the questions. Multiple choice (multiple guess) should be banned. Scores should follow a normal distribution with both mean and median at 50%. Giving the same test to an expert system given the same level of instruction as the students should result in a failing grade, which I'd put at anything under 20% on this scale. (You are not testing their ability to be a computer. Not in this system.)

Each test should produce two scores: the raw score (showing current ability) and the score after adjusting for the anticipated result based on previous test results (which show the ability to learn, and therefore what should have been learned this time -- you want the third-order differential, so the first three tests cannot be examined this way). The adjusted score should run from -1 (learned nothing new, consider moving across to a different perspective in the same stream) through 0 (learned at the expected rate) to +1 (learning too fast for the stream, consider moving up). Students should never be moved downstream on a test result, only on a neutral evaluation of some kind.
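
To make that adjustment concrete, here's a minimal sketch of how such a score could be computed. The quadratic extrapolation of the previous three raw scores and the scaling constant are my own assumptions for illustration, not part of the proposal:

    # Sketch: adjusted score in [-1, +1], comparing the actual result with a
    # prediction extrapolated from the student's previous three raw scores
    # (percentages). The scale constant is a tuning knob, nothing more.
    def adjusted_score(history, actual, scale=20.0):
        if len(history) < 3:
            return None  # the first three tests can't be scored this way
        s1, s2, s3 = history[-3:]
        predicted = 3 * s3 - 3 * s2 + s1         # quadratic extrapolation of the trend
        surprise = (actual - predicted) / scale  # the "third-order" deviation
        return max(-1.0, min(1.0, surprise))

    print(adjusted_score([42, 51, 58], actual=63))  # 0.0 -- learned at the expected rate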

Section 4: Fundamentals within any given craft, study or profession should be taught as deeply and thoroughly as possible. Those change the least and will apply even as the details they are intertwined with move in and out of fashion. "Concrete" skills should be taught broadly enough that there is never a serious risk of unemployability, but also deeply enough that the skills have serious market value.

Section 5: Absolutely NO homework. It's either going to be rushed, plagiarized or paid for. It's never going to be done well and it serves no useful purpose. Year-long projects are far more sensible: they achieve the repeated use of a skill that homework aims for, but in a way that is immediately practical and immediately necessary.

Lab work should likewise not demonstrate trivial stuff, but through repetition and variation lead to the memorization of the theory and its association with practical problems of the appropriate class.

Section 6: Jamie Oliver's advice on diet should be followed within reason - and the "within reason" bit has more to do with what food scientists and cookery scientists discover than with any complaints.

Section 7: Go bankrupt. This is where the whole scheme falls over -- to do what I'm proposing seriously would require multiplying the cost of maintaining and running a school by 25-30, with no additional income. With a few billion in starting capital, invested in businesses likely to be boosted by a high-intensity K-PhD educational program, it is just possible you could reduce the bleeding to manageable proportions. What you can never do in this system is turn a profit, although all who are taught will profit very substantially from it.

User Journal

Journal Journal: I don't know which is scarier

That I am old enough to remember where my current .sig came from, or that nobody else is.....! For those who are suffering from a memory lapse, here is the sig: "The world is in darkness. To erase data is to suppress truth; to halt computing is to shackle the mind."

Ok, ok, you're too lazy to google it, so here's the link: Son of Hexadecimal Kid

User Journal

Journal Journal: Automotive Security

According to the Center for Automotive Embedded Systems Security, there are serious security flaws in existing automotive technology. Not necessarily a big deal for now, as they observe that the risks are low at the current time. Emphasis on "current". They also state that no crackers have been observed to use the required level of sophistication. Again, the emphasis needs to be on "observed". Yes, it may well be a while before automotive networks reach the point where this is exploited in the wild (at least at any scale), but I would remind you that it took Microsoft from Windows 3.0 all the way to Windows XP Service Pack 2 to take security even remotely seriously. That's a long, long time. And Microsoft had nothing like the install base of the car industry. Further, the qualifications most companies required of a system administrator were a good deal steeper than the requirements for a car mechanic, so systems administrators were likely far more familiar with the issues involved. Systems administrators are also far more accountable for security issues, since there are plenty of third-party tools that even novice users can use to spot malicious software.

The first question is why this even matters. It doesn't affect anyone today. No, but it's guaranteed to affect at least some current Slashdot readers in their lifetime and, depending on how rapidly car networks develop, may affect a significant fraction surprisingly fast. Technology doesn't move at Stone Age speeds any more. Technology advances rapidly and you can't use obsolete notions of progress to determine what will happen next year or over the next decade.

The second question is what anyone could seriously do, even if it were an issue. Not too many Slashdotters own automotive companies. In fact, I doubt ANY Slashdotters own automotive companies. Well, the validation tools are Open Source, and MISRA has a fair few links to members and software packages. In fact, even if developers just developed an understanding of MISRA's C and C++ specifications, it might be quite valuable, as it would allow people to understand what is being done (if anything) to improve reliability and how (if at all) this impacts security. You don't get reliability for free; there will be some compromises made elsewhere.

User Journal

Journal Journal: Has anyone had problems with DB companies? What therapies work with bosses? 4

I've been having problems with Enterprise DB. This company maintains the Windows port of Postgres, but I have been finding their customer service... less than satisfactory. This is the second time in, oh, 21 years that I've actually been infuriated by a company. However, to be entirely fair to the business and indeed the sales person, it is entirely possible this was a completely freak incident with no relationship to normal experience. There were all kinds of factors involved, so it's a messy situation all round, but the hard-sell aggressiveness and verbal abuse went way beyond what I have ever experienced from a professional organization in two DECADES. What I want to know from other Slashdotters is whether this is about on a par with tales of meteorites landing on someone's sofa (which is my personal suspicion) or whether it's a more insidious issue. Please, please, please, do not take one incident as a general rule. I've not seen any article on Slashdot or LWN reporting wider issues with them, which you know perfectly well would have happened had there been a serious, widespread problem -- especially with all of the reporting on database issues in recent times and the search for alternatives to MySQL once leading developers defected and major forks arose.

This is, however, a major question. Like it or not, we need databases we can rely on and trust, which means that when they are backed by companies, we need the companies that back them to be honorable. (PostgreSQL itself isn't owned, so I trust the engine itself just fine. The development team is very impressive - and, yes, I do monitor the mailing lists.) Value-added only has any added value if it's valuable.

What is worse, from my perspective, is that my current boss is now treating this as how companies work when reselling Open Source products -- his practical experience was being on the receiving end of all this. If we're to take advantage of the freedom (and bloody high quality) on offer in the Open Source world, I need to deprogram him of the notion that Open Source vendors give hassle and sell grief. Does anyone have any experience doing this?

User Journal

Journal Journal: Save TV for Geeks! 2

A petition calling for the return of perhaps the most important television show since The Great Egg Race is currently running but isn't exactly getting anywhere fast. It is vitally important that intellectually-stimulating shows be encouraged -- the consequence of failure (24 hours of Jersey Shore on all channels) is too horrible to contemplate. Unfortunately, as things stand, that's exactly what we are heading towards. Save your television and your mind before it's too late!

User Journal

Journal Journal: 1-3% of all main-sequence stars have planets?

The venerable BBC is reporting that a survey of light emitted from white dwarfs showed that between 1% and 3% had material (such as silicon) falling into the star on a continuous basis, potential evidence of dead worlds and asteroids. On this basis, the authors of the study speculate that the same percentage of main-sequence stars in the active part of their lives will have rocky matter around them. This is not firm evidence of actual planetary formation, as asteroids would produce the same results, but it does give an upper bound, and some idea of what a lower bound might be, for planetary formation.

Aside from being a useful value for Drake's Equation, the rate of planetary formation would be valuable in understanding how solar systems develop and what sort of preconditions are required for an accretion disk of suitable material to form.
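
Just to illustrate where such a figure would slot in, here's a minimal sketch of Drake's Equation with the 1-3% range plugged in as the fraction of stars with planets. Every other value below is a placeholder guess of mine, not a measurement:

    # Drake's Equation: N = R* x f_p x n_e x f_l x f_i x f_c x L
    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    for f_p in (0.01, 0.03):  # the 1% and 3% bounds from the white dwarf survey
        n = drake(r_star=7.0, f_p=f_p, n_e=0.5, f_l=0.1, f_i=0.01, f_c=0.1, lifetime=10_000)
        print(f"f_p = {f_p:.0%}: roughly {n:.2f} detectable civilisations")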

Because the test only looked for elements too heavy to have been formed in the star, we can rule out the observations being due to cometary debris.

User Journal

Journal Journal: Fireball, but not XL5 3

Four fireballs, glowing blue and orange, were visible last night over the skies of the Carolinas, on the southeast coast of the United States, followed by the sound of an explosion described as being like thunder. Reports of the noise came in from as far afield as Connecticut. There is currently no word from NASA or the USAF as to what it could be, but it seems improbable that anything non-nuclear the military could put up would be heard over that kind of distance. It therefore seems likely to be a very big meteorite.

The next question is what type of meteorite. This is not an idle question. The one that slammed into the Sudan recently was (a) extremely big, at an estimated 80 tonnes, and (b) from the extremely rare F-class of asteroid. If this new meteorite is also from an F-class asteroid, then it is likely associated with the one that hit Sudan. This is important, as it means we might want to be looking very closely for other fragments yet to hit.

The colours are interesting and allow us to limit what the composition could have been, and therefore where it came from. We can deduce this because anything slamming through the atmosphere is basically undergoing a giant version of the basic chemistry "flame test" for substance identification. We simply need to look up which metals produce blue, and in so doing we see that cadmium produces a blue/violet colour, with copper producing more of a blue/green.

Other metals also produce a blue glow, and tables of these colours abound, but some are more likely in meteoric material than others. Cadmium exists in meteorites -- well, all elements do, if you find enough meteorites -- but it exists in sufficient quantity that it could produce this sort of effect. (As noted in the chemmaster link, low concentrations can't be detected by this method; that problem is going to be vastly worsened by the fact that this isn't a Bunsen burner being used and the distance over which you're observing is extreme.)

Ok, what else do we know? The fireballs were also orange. Ureilites, such as the Sudan impactor, contain a great deal of calcium, which burns brick-red, not orange. This suggests we can rule out the same source, which in turn means we probably don't have to worry about being strafed the way Jupiter was by comet Shoemaker-Levy 9 (21 impacts).
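
Purely to illustrate the rule-out logic, here's a toy sketch using only the colours mentioned above; a real flame-test table would be far larger, and the colour sets here are simplified:

    # Toy flame-test lookup: which of the metals discussed could account for the
    # observed colours? Calcium drops out because brick-red is not among them.
    FLAME_COLOURS = {
        "cadmium": {"blue", "violet"},
        "copper":  {"blue", "green"},
        "calcium": {"brick-red"},
    }

    def candidates(observed):
        return [metal for metal, colours in FLAME_COLOURS.items() if colours & observed]

    print(candidates({"blue", "orange"}))  # ['cadmium', 'copper']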

What can we say about it, though? Well, provided the surviving fragments didn't fall into the ocean, every meteorite hunter on the planet will be scouring newspaper stories that might indicate where impacts occurred. Meteoric material is valuable, and anything on a scale big enough to be heard across the entire east coast of the US is going to be worth looking for. It had split into four in the upper atmosphere, so you're probably looking at a few thousand fragments reaching ground level, worth more than a year's average pay.

User Journal

Journal Journal: What constitutes a good hash anyway? 3

In light of the NIST complaint that there are so many applicants for their cryptographic hash challenge that a good evaluation cannot be given, I am curious as to whether they adequately defined the challenge in the first place. If the criteria are too loose, then of course they will get entries that are unsuitable. However, the number of hashes entered does not seem to be significantly more than the number of encryption modes entered in the encryption mode challenge. If this is impossible for them to evaluate well, then maybe that was too, in which case maybe we should take their recommendations on encryption modes with a pinch of salt. If, however, they are confident in the security and performance of their encryption mode selections, what is their real objection in the hashing case?

But another question one must ask is why there are so many applicants for this, when NESSIE (the European version of this challenge) managed just one? Has the mathematics become suddenly easier? Was this challenge better-promoted? (In which case, why did Slashdot only mention it on the day it closed?) Were the Europeans' criteria that much tougher to meet? If so, why did NIST loosen the requirements so much that they were overwhelmed?

These questions, and others, look doomed never to be seriously answered. However, we can take a stab at the criteria and evaluation problem. A strong cryptographic hash must have certain mathematical properties. For example, the distance between any two distinct inputs must be unconnected to the distance between the corresponding outputs. Otherwise, knowing the output for a known input and the output for an unknown input will tell you something about the unknown input, which you don't want. If you take a large enough number of inputs and plot the distance between inputs against the distance between the corresponding outputs, you should get a completely random scatter plot. Also, if you take a large enough number of inputs at fixed intervals, the distances between the corresponding outputs should form a uniform distribution. Since you can't reasonably test 2^512 inputs, you can only apply statistical tests to a reasonable subset and see whether the probability that you have the expected patterns is within your desired limits. These two tests can be done automatically, and any hash that exhibits a skew that could expose information can then be rejected equally automatically.
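
As a rough sketch of what such automatic checks might look like, the following uses SHA-256 from Python's hashlib as a stand-in for a candidate hash (the actual SHA-3 entries aren't in the standard library), measures bit-level Hamming distances, and uses sample sizes that are arbitrary choices of mine:

    import hashlib, os, random
    from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()  # stand-in for a SHA-3 candidate

    def hamming(a: bytes, b: bytes) -> int:
        # Bit-level Hamming distance between two equal-length byte strings.
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    # Check 1: input distance vs output distance should be uncorrelated
    # (a random scatter, correlation coefficient near zero).
    d_in, d_out = [], []
    for _ in range(2000):
        a = os.urandom(32)
        b = bytearray(a)
        for _ in range(random.randint(1, 128)):  # flip a random number of bits
            bit = random.randrange(256)
            b[bit // 8] ^= 1 << (bit % 8)
        d_in.append(hamming(a, bytes(b)))
        d_out.append(hamming(h(a), h(bytes(b))))
    print("input/output distance correlation:", correlation(d_in, d_out))

    # Check 2: hash inputs taken at a fixed interval; the output distances
    # should show no skew or structure that leaks information about the inputs.
    dists = [hamming(h(i.to_bytes(8, "big")), h((i + 1).to_bytes(8, "big")))
             for i in range(2000)]
    print("mean / min / max output distance:", mean(dists), min(dists), max(dists))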

This is a trivial example. There will be other tests that can also be applied automatically to weed out the more obviously flawed hashing algorithms. But this raises an important question: if you can filter out the more problematic entries automatically, why does NIST have a problem with the number of entries per se? They might legitimately have a problem with the number of GOOD entries, but even then all they need to do is have multiple levels of acceptance and an additional round or two. E.g. at the end of human analysis round 2, NIST might qualify all hashes that are successful at that level as "sensitive-grade" with respect to FIPS compliance, so that people can actually start using them, then have a round 3 which produces a pool of 3-4 hashes that are "classified-grade", and a final round to produce the definitive SHA-3. Adding more rounds takes longer, but by producing lower-grade certifications along the way, the extra time needed to perform a thorough cryptanalysis isn't going to impede those who actually use such functions.

(Yes, it means vendors will need to support more functions. Cry me a river. At the current scale of ICs, you can put one hell of a lot of hash functions onto one chip, and have one hell of a lot of instances of each. Software implementations are just as flexible, with many libraries supporting a huge range. Yes, validating will be more expensive, but it won't take any longer if the implementations are orthogonal, as they won't interact. If you can prove that, then one function or a hundred will take about the same time to validate to accepted standards. If the implementations are correctly designed and documented, then proving the design against the theory and then the implementation against the design should be relatively cheap. It's crappy programming styles that make validation expensive, and if you make crappy programming too expensive for commercial vendors, I can't see there being any problems for anyone other than cheap-minded PHBs - and they deserve to have problems.)

User Journal

Journal Journal: Beowulf MMORPGs 3

Found this interesting site, which is focussing on developing grid computing systems for gaming. The software they seem to be using is a mix of closed and open source.

This could be an important break for Linux, as most of the open source software being written is Linux-compatible, and gaming has been Linux's biggest problem area. The ability to play very high-end games -- MMORPGs, distributed simulators, wide-area FPSes, and so on -- could transform Linux in the gaming market from being seen as a throwback to the 1980s (as unfair as that is) to being considered world-class.

(Windows machines don't play nearly so nicely with grid computing, so it follows that it will take longer for Microsoft and Microsoft-allied vendors to catch up to the potential. That is time Linux enthusiasts can use to get a head-start and to set the pace.)

The question that interests me is: will they? Will Linux coders use this opportunity of big university research teams and big vendor interest to leapfrog the existing markets completely and go straight for the market after? Or will this be seen as not worth the time, the same way that a lot of potentially exciting projects have petered out (e.g. Open Library, Berlin/Fresco, KGI, OpenMOSIX)?

User Journal

Journal Journal: The Lost Tapes of Delia Derbyshire

Two hundred and sixty-seven tapes of previously unheard electronic music by Delia Derbyshire have been found and are being cataloged.

For those unfamiliar with Delia Derbyshire, she was one of the top pioneers of electronic music in the 1950s and 1960s. One of her best-known pieces was the original theme tune to Doctor Who. According to Wikipedia, "much of the Doctor Who theme was constructed by recording the individual notes from electronic sources one by one onto magnetic tape, cutting the tape with a razor blade to get individual notes on little pieces of tape a few centimetres long and sticking all the pieces of tape back together one by one to make up the tune".

Included in the find was a piece of dance music recorded in the mid-60s which, when examined by contemporary artists, was judged good enough to pass for quality mainstream dance music today. Another piece was incidental music for a production of Hamlet.

The majority of her music mixed wholly electronic sounds, from a sophisticated set of tone generators and modulators, with electronically altered natural sounds, such as those made from gourds, lampshades and voices.

User Journal

Journal Journal: Well, this is irritating. 3

Someone has trawled through YouTube and flagged not only the episodes of The Tripods, but also all fan productions, fan cine footage and fan photography of the series. How so, can't you buy it on DVD? Only the first season; the second exists only in pirated form at scifi conventions, and of course the fan material doesn't exist elsewhere at all. The third season, of course, was never made, as the BBC had a frothing xenophobic hatred of science fiction at the time. (So why they made a Dalek their Director-General at about that time, I will never know...)

What makes this exceptionally annoying is that the vast bulk of British scifi has been destroyed by the companies that produced it, the vast bulk of the remainder has never seen the light of day since broadcast, and the vast bulk of what has been released has been either tampered with or damaged in some other way, often (it turns out later) very deliberately, sometimes (again it turns out later) for the purpose of distressing the potential audience.

I've nothing against companies enforcing their rights, but when those companies are acting in a cruel and vindictive fashion towards the audience (such as John Nathan-Turner's FUD about audiences being too stupid to know what they like, or too braindead to remember what they have liked), and the audiences vote with their feet, on what possible grounds can it be considered justified for those companies to (a) chain the audience to the ground, and (b) then use the immobility of the audience to rationalize and excuse the abuse by claiming the audience isn't going anywhere?

I put it to the Slashdot Court of Human/Cyborg Rights that scifi fans are entitled to a better, saner, civilized explanation, and that whilst two wrongs can never make a right, one wrong is never better.

User Journal

Journal Journal: 1nm transistors on graphene

Well, it now appears the University of Manchester in England has built 1nm transistors on graphene. The article is short on details, but it appears to be a ring of carbon atoms surrounding a quantum dot, where the quantum dot is not used for quantum computing or quantum states but rather for regulating the electrical properties. This is still a long way from building a practical IC using graphene. It is, however, a critical step forward. The article mentions other bizarre behaviours of graphene but does not go into much detail. This is the smallest transistor produced to date.
PC Games (Games)

Journal Journal: Scientific and Academic Open Source - Hotspots, Black Holes

One of the most fascinating things I've observed while searching for Open Source projects for whatever I'm doing at the time is the huge disparity in what is available, how it is used and who is interested.

An obvious place to start is in the field of electronics. Computer-based tools are already used to build such stuff, so it's a natural replacement, right? Well, almost. There are tools for handling VHDL, Verilog and SystemC. There are frameworks for simulating both clock-based and asynchronous circuits. You can do SPICE simulations, draw circuit diagrams, download existing circuits as starting points or sources of inspiration, simulate waveforms, determine coverage and design PCBs. OpenCores provides a lot of fascinating already-generated systems, and Sun provides the staggering T1 and T2 UltraSPARC cores and the Sirocco 64-bit SPARC. This field probably hasn't got anywhere near everything it needs, but it has a lot.

Maths is another obvious area. There are plenty of Open Source tools for graphing, higher-order logic, theorem provers, linear algebra, eigenvalues and eigenvectors, signal processing, multiple precision, numerical methods, solvers for all kinds of other specific problem types, etc.

What about astronomy? That requires massive crunching of table data, correlation of variations, and moving telescopes around with absolute precision -- things computers tend to be very good at. There are a few tools. Programs for capturing images are probably the most common, although some telescopes come with software for controlling them, obtaining data and performing basic operations. Mind you, how much more than this does one need in software? Some things are better done in hardware (for now, at least) because the software hasn't the speed. Yes, the control software seems a little specialized, but it'd be hard to make something like that general-purpose.

Chemistry. Hmmm. Lots of trivial stuff, more educational than valuable - periodic tables, 3D models of molecules, LaTeX formatting aids. There's a fair amount on the study of crystals and crystallography, which is as much chemistry as it is physics, but there's not a lot else. Chemistry involves a lot of tables (which would be ideal for a standardized database), a lot of mathematical equations, formulae, graphing, measuring and correlating all sorts of data, the consequences of different filtering and separation techniques, the wavelength and intensity of energies, analysis of the results of atomic mass spectrometry or other noisy data, etc. I see the underlying tools for doing some (but not all) of these things, but I don't see the heavy lifting.

Archaeology has very few non-trivial tools. There's some signal processing for ground-penetrating RADAR, but there are virtually no tools out there that could help with interpretation. In fact, most RADAR programs don't interpret either; they just display the result on a small LCD screen. Nor do any tools exist for correlating interpretations (other than manually, via a GIS database that is extremely naive for this purpose). There are a few scraps here and there, but signal analysis and GIS seem to be about it, and those were mostly developed for mining companies and tend to show it.

Biology has plenty of DNA sequencing code. By now, Slashdotters should be able to sequence their own DNA, not pay someone a thousand to do it. You mean those tools aren't enough, that you need more hardware? And a lot more software? Fair enough -- sequencing is an important step, but it's not the only one needed.

Mechanical Engineering. I haven't seen anything of any significance.

Geology. Not much, beyond the same software as for archaeology, but used to find seams in rock.

Psychology: Nada.

Psychiatry: None.

Sports: Lots of software getting used, but little of it is open source.

The result: those with the least to lose and the most to gain make the change. Those who feel there's no benefit from changing what they're doing will continue doing what they're doing. My suggestion? There are gaping holes in Open Source. Fill them in.
