User Journal

Journal Journal: Continuation on education 13

Ok, I need to expand a bit on my excessively long post on education some time back.

The first thing I am going to clarify is streaming. This is not merely distinction by speed, which is the normal (and therefore wrong) approach. You have to distinguish by the nature of the flows. In practice, this means distinguishing by creativity (since creative people learn differently than uncreative people).

It is also not sufficient to divide by fast/medium/slow. The idea is that differences in mind create turbulence (a very useful thing to have in contexts other than the classroom). For speed, this is easy - normal +/- 0.25 standard deviations for the central band (i.e. everyone essentially average), plus two additional bands on either side, making five in total.

Classes should hold around 10 students, so you have lots of different classes for the average band, fewer for the bands on either side, and perhaps only one for each of the outer bands. This solves a lot of timetabling issues, as classes in the same band are going to be interchangeable as far as subject matter is concerned. (This means you can weave in and out of the creative streams as needed.)

Creativity can be ranked, but not quantified. I'd simply create three pools of students, with the most creative in one pool and the least in a second. It's about the best you can do. The size of the pools? Well, you can't obtain zero gradient, and variations in thinking style can be very useful in the classroom. 50% in the middle group, 25% in each of the outliers.

So you've 15 different streams in total. Assume creativity and speed are normally distributed and that the outermost speed streams contain one class of 10 each. Starting with speed for simplicity, I'll forgo the calculations and guess that the upper/lower middle bands would then have nine classes of 10 each and that the central band will hold 180 classes of 10.

That means you've 2000 students, of whom the assumption is 1000 are averagely creative, 500 are exceptional and 500 are, well, not really. Ok, because creativity and speed are independent variables, we have to have more classes in the outermost band - in fact, we'd need four of them, which means we have to go to 8000 students.

These students get placed in one of 808 possible classes per subject per year. Yes, 808 distinct classes. Assume 6 teaching hours per day x 5 days, making 30 available hours a week, which means you can have no fewer than 27 simultaneous classes per year. That's 513 classrooms in total, fully occupied in every timeslot, and we're looking at just one subject. Assuming 8 subjects per year on average, that goes up to 4104. Rooms need maintenance and you also need spares in case of problems. So, triple it, giving 12312 rooms required. We're now looking at serious real estate, but there are larger schools than that today. This isn't impossible.

The 8000 students is a per-year figure, as noted earlier. And since years won't align, you're going to need to go from the first year of pre/playschool to the final year of an undergraduate degree. That's a whole lotta years - 19 of them, including industrial placement. 152,000 students in total. About a quarter of the total student population in the Greater Manchester area.
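
Just to sanity-check that arithmetic, here's a quick throwaway sketch in Java using the figures above (the 808 classes, 30 teaching hours, 19 years and 8000 students per year are this post's estimates, not measured data):

// Back-of-the-envelope check of the room and student counts estimated above.
public class RoomEstimate {
    public static void main(String[] args) {
        int classesPerSubjectPerYear = 808; // estimated above
        int hoursPerWeek = 6 * 5;           // 6 teaching hours/day x 5 days
        int years = 19;                     // pre/playschool through undergraduate
        int subjects = 8;                   // average subjects per year
        int spareFactor = 3;                // maintenance and spare capacity
        int studentsPerYear = 8000;

        // 808 classes squeezed into 30 weekly slots => at least 27 running at once
        int simultaneous = (int) Math.ceil(classesPerSubjectPerYear / (double) hoursPerWeek);
        int roomsOneSubject = simultaneous * years;            // 27 * 19 = 513
        int roomsAllSubjects = roomsOneSubject * subjects;     // 513 * 8 = 4104
        int roomsWithSpares = roomsAllSubjects * spareFactor;  // 4104 * 3 = 12312
        int totalStudents = studentsPerYear * years;           // 152,000

        System.out.printf("%d simultaneous, %d/%d/%d rooms, %d students%n",
                simultaneous, roomsOneSubject, roomsAllSubjects, roomsWithSpares, totalStudents);
    }
}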

The design would be a nightmare: a layout from hell that minimizes conflict (intellectual peers are not always age peers, and neither are necessarily perceptual peers) while also minimizing the distance walked. Due to the lack of wormholes and non-simply-connected topologies, this isn't trivial. A person at one extreme corner of the two-dimensional spectrum in one subject might be at the other extreme corner in another. From each class, there will be 15 vectors to the next one.

But you can't minimize per journey. Because there will be multiple interchangeable classes, each of which will produce 15 further vectors, you have to minimize per day, per student. Certain changes impact other vectors, certain vector values will be impossible, and so on. Multivariable systems with permutation constraints. That is hellish optimization, but it is possible.
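
To make "minimize per day, per student" concrete: the quantity being minimized for each student is just the sum of leg distances over their day's room sequence, and the hard part is searching over the interchangeable class assignments. A minimal Java sketch of that objective (Room and the straight-line distance are hypothetical stand-ins for the real building layout):

import java.util.List;

// Objective for one student on one day: total distance walked between consecutive rooms.
// A timetable search would minimise the sum of this over all students, subject to the
// permutation constraints described above.
public class WalkCost {
    record Room(double x, double y) {}

    static double distance(Room a, Room b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y()); // placeholder metric
    }

    static double dailyWalk(List<Room> scheduleInOrder) {
        double total = 0.0;
        for (int i = 1; i < scheduleInOrder.size(); i++) {
            total += distance(scheduleInOrder.get(i - 1), scheduleInOrder.get(i));
        }
        return total;
    }
}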

It might actually be necessary to make the school a full research/teaching university of the sort commonly found in England. There is no possible way such a school could finance itself off fees, but research/development, publishing and other long-term income might help. Ideally, the productivity would pay for the school. The bigger multinationals post profits in excess of 2 billion a year, which is how much this school would cost.

Pumping all the profits into a school in the hope that the 10 uber creative geniuses you produce each year, every year, can produce enough new products and enough new patents to guarantee the system can be sustained... It would be a huge gamble, it would probably fail, but what a wild ride it would be!

User Journal

Journal Journal: A Perspective on Privacy

No doubt people who've read my posts realize I'm concerned about the NSA spying issue, especially in light of the global cooperation in sharing information between spy networks run by other countries, including Australia, New Zealand, Germany, and the UK. Even here in Canada, our CSIS uses information collected on its behalf by the US NSA. It's already being abused, with information being fed to the DEA and from there on to police departments in the US - which has nothing to do with the original goal of "catching terrorists."

As my own ISP, SaskTel, leases servers in Florida, my email is monitored. My Google and Yahoo accounts are also monitored. There is no way for me to communicate any more without being tracked.

I've always expected this day would come, because when the internet protocol was designed, one of the key requirements was headers that identify the sender and receiver of data packets. There was no way around this, and there is still no way to avoid such identification (though it can be obfuscated to some degree by systems like Tor).

As computers have gotten more powerful, it was inevitable that humanity would gain the capability to monitor all communications and track all users. It was just a question of when it would happen, and I must admit I'm surprised that we've come this far in my lifetime.

Unfortunately, it would seem the corporate-led fascists are the ones who are leading the charge. Governments whose leaders no longer respect the will of the people, nor even listen to the concerns of the people, but instead spin the lies suggested by their corporate masters. The world is all about the money nowadays.

Maybe some day we'll see a resurgence of humanism and a more equitable social order based on socialist ideals a la Star Trek, where people work for perks, not survival, but I don't think we're going to see that in my lifetime. Perhaps we'll never see it, because the more entrenched the elite owners of the corporate world become in their mastery of individual countries' governments, the less likely it is that they can be uprooted and removed from the halls of power.

Still, I haven't given up hope on humanity.

I'm just very worried about where things are going to go in my own lifetime, never mind the lifetimes of my nieces and nephews.

Despite the tracking that is possible, people insist on using pseudonyms and aliases for their web accounts. I think that's fundamentally wrong. If you've got any sense of honour, integrity, and personal responsibility, you should not be afraid of having your comments and articles on the 'net associated with who you really are. In fact, you should be proud of who you are, stand up as an individual, and rant with enthusiasm against the evils of the world.

Sure, you'll make mistakes. You'll say embarrassing things. You'll shove your foot in your mouth up to the knee from time to time. And those mistakes will not be erased from the 'net.

But so what? Everyone is human. If anyone is in error, it's those who insist on judging people by their past mistakes instead of realizing that people screw up, learn from their mistakes, and grow to be better people because of them. I've certainly never worried about being judged by potential employers or friends on the internet.

After all, if I am anything, it is honest and blunt with my opinions. I am the kind of person I want to be and would want for a friend: trustworthy and blunt. I hate double-talking backstabbers with a passion, and wouldn't want to work for a company that would judge me based on my internet social life instead of my job history and quality of my work.

So rave on, rave on, rave on, I shall.

Peace.

Mark Sobkow

User Journal

Journal Journal: MSS Code Factory 1.11.6160 Beta 6 (Ok, so I'm not done with betas yet after all)

Beta 6 implements the table id generators for the RAM implementation and corrects a defect in the implementation of the RAM deletes.

It also corrects the use of table id generators for all of the supported databases (DB/2 LUW 10.1, MySQL 5.5, SQL Server 2012, PostgreSQL 9.1, Oracle 11gR2, and Sybase ASE 15.7). Previously, the client-side code generated for objects which incorporate BLOBs (or TEXT for SQL Server) did not properly use the table id generators, but instead relied on obsolete/incorrect code for schema id generators of the same name.

All of the RAM and database implementations have been regression tested using the CFDbTest 2.0 test suite.

Beta 6 and the corresponding test suite are available for download from http://sourceforge.net/projects/msscodefactory/files/.

User Journal

Journal Journal: MSS Code Factory 1.11.6008 - Beta 5 - The last of the betas

I finally reached Beta 5 with my pet project. It now supports manufacturing of code for DB/2 LUW 10.1, SQL Server 2012, MySQL 5.5, Oracle 11gR2, Sybase ASE 15.7, and PostgreSQL 9.1.

I've finally achieved what I set out to do 15 years ago -- provide a multi/cross database coding tool that automates the mapping from an abstract business model to the specifics of the database while using all of the available performance tuning options of the database. This is far more challenging and complex than something like EJB3, which just generates dynamic SQL, not stored procedures and prepared statements.
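
For anyone who hasn't lived the difference, here's a hand-written JDBC sketch of the two approaches - this is not MSS Code Factory's actual generated code, and the table and procedure names are invented for illustration:

import java.sql.*;

public class InsertStyles {
    // Dynamic SQL: the statement text is rebuilt and re-parsed on every call,
    // with the value spliced into the string.
    static void dynamicInsert(Connection conn, String name) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "insert into customer (name) values ('" + name.replace("'", "''") + "')");
        }
    }

    // Prepared call to a stored procedure: parsed and planned once on the server,
    // with the value passed as a bound parameter.
    static void preparedInsert(Connection conn, String name) throws SQLException {
        try (CallableStatement stmt = conn.prepareCall("{ call sp_customer_insert(?) }")) {
            stmt.setString(1, name);
            stmt.execute();
        }
    }
}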

Next up will be using the tool to write an application. I'm thinking of doing something simple and straightforward, like the core of an accounting system with general ledger, accounts, subledgering, and so on. During that development I may well add in the security support I've been planning all these years, but maybe not. Time will tell.

Regardless, I'm just thrilled to have finally achieved this long-outstanding milestone. :)

User Journal

Journal Journal: MSS Code Factory 1.11.5365 Beta 1

The PostgreSQL 9.1 implementation has been updated to make use of stored procedures, prepared SQL statements, and every other performance-tuning trick I've learned in 30+ years of database programming. Subsequent betas will be released as additional databases are brought to the same level of integration as this release for PostgreSQL.

The PostgreSQL code should run rings around EJB3 and similar technologies that rely on dynamic SQL.

MySQL 5.5 support is as complete as it will ever be, and basic DB/2 LUW 10.1 support is also provided.

Download MSS Code Factory Beta 1 from SourceForge.

User Journal

Journal Journal: MSS Code Factory is moving right along 1

As you can see from the MSS Code Factory project site, things are progressing steadily with my pet project. I've just finished spending a couple of weeks reworking the PostgreSQL database IOs to use PreparedStatements wherever possible instead of pure dynamic SQL. At this point, dynamic SQL is only used for cursor-based reads and index queries which reference nullable columns; all other queries and accessors use prepared statements (static SQL.)

I haven't tested the performance of this new layer with PostgreSQL, and don't intend to compare performance of dynamic and static SQL as it would require keeping copies of and debugging both versions of the code. I know from previous experience with DB/2 UDB that using PreparedStatements can result in an 80% overall performance improvement for something like loading a model into a relational database.
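
The kind of pattern that produces that sort of gain when bulk-loading a model looks roughly like this - a generic JDBC sketch with invented table and column names, not the factory's generated code:

import java.sql.*;
import java.util.List;

public class ModelLoader {
    // Prepare once, then bind and execute for every row. With dynamic SQL the
    // statement text would be rebuilt and re-parsed per row, which is where
    // most of the load time goes.
    static void loadAttributes(Connection conn, List<String[]> rows) throws SQLException {
        String sql = "insert into model_attr (obj_id, attr_name, attr_value) values (?, ?, ?)";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            for (String[] row : rows) {
                stmt.setString(1, row[0]);
                stmt.setString(2, row[1]);
                stmt.setString(3, row[2]);
                stmt.addBatch();
            }
            stmt.executeBatch();
        }
    }
}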

Unfortunately, most of the performance benefit would be lost when using the code in a web server: you have to releasePreparedStatements() at the end of each web page served, because a particular vendor's implementation of PreparedStatements might keep data associated with them on the server end of the connection, and the connection itself has to be released after serving the page.

One of the biggest advantages of switching to static SQL is that parameter binding with PreparedStatements can handle variables up to the maximum size for the type, whereas dynamic SQL is limited by the size of the statement buffer accepted by the database (which used to be a significant limitation with DB/2 UDB 7.2, though I've no doubt that limit has been expanded or eliminated.)
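
In other words, a bound parameter is only limited by the column type, while a spliced-in literal has to fit through the database's statement parser. A small JDBC sketch of the point (the table name is hypothetical):

import java.sql.*;

public class LargeValueWrite {
    // The bound byte[] can be as large as the column type allows; building the same
    // statement as one giant SQL string would run into whatever statement-length
    // limit the database's parser imposes.
    static void storeDocument(Connection conn, long id, byte[] contents) throws SQLException {
        String sql = "update document set body = ? where doc_id = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setBytes(1, contents);
            stmt.setLong(2, id);
            stmt.executeUpdate();
        }
    }
}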

A key point of the use of static SQL is that the only difference between the databases now is the specific SQL functions used to convert strings to date-time types, so I'm going to be rolling out the support for the commercial databases under GPLv3 after all, rather than trying to leverage them for profit. The differences are just too negligible for me to believe anyone would pay for the privilege of using a commercial database.

User Journal

Journal Journal: I gave up and filed for disability

I've been working as a programmer since the spring of 1987. I've travelled all over North America, worked in many cities and with some of the biggest names in technology. I've had an absolute blast working with skilled and intelligent people who were not only good at what they did, but became good friends.

But it's time to face the facts: I can no longer work "office hour" jobs due to chronic migraines. Even with complete flexibility to work from home and at odd hours, I was barely able to get in 24-30 hours per week at the last company that was willing or able to work with me on the scheduling issues caused by the migraines.

I've therefore filed for disability here in Saskatchewan, and am in the process of getting approved for the SAID program (Saskatchewan Assured Income for Disability). I used to pay twice as much in taxes per year in the '90s as I'll be getting under SAID, but at least it'll be subsistence living.

Don't make the same mistake I did of enjoying your income while you have it. Save and invest your money like it's going to be the last dollar you earn, because you never know when you're going to be hit by the proverbial bus and find yourself disabled. It's not fun, it's not a "safety net" as some claim, and it's a very depressing future to face.

But many of you will face that future, whether due to medical issues or accidents.

Good luck.

P.S.

I'm going to try to keep Singularity One Systems, Inc. alive because every once in a while I do find a few hundred bucks worth of offsite programming I can do for someone. With the company, I can "bank" that income, and draw the $200/month I'm allowed to on disability over time, as well as running a few expenses like monitors and part of my internet/phone fees through the company instead of paying it all out of pocket.

Who knows? Maybe some day one of my pet projects will turn into a money maker. I've always said I'd program for a hobby if I weren't programming for pay, and that's where life is headed: hobby programming to keep myself from being bored silly in "retirement."

Peace.

Books

Journal Journal: History books can be fun (but usually aren't and this is a Bad Thing) 2

Most people have read "1066 and All That: a memorable history of England, comprising all the parts you can remember, including 103 Good Things, 5 Bad Kings and 2 Genuine Dates" (one of the longest book titles I have ever encountered), and some may have encountered "The Decline and Fall of Practically Everybody", but these are the exceptions and not the rule. What interesting - but accurate-ish - takes on history have other Slashdotters encountered?

User Journal

Journal Journal: Joy oh joy 2

My Ubuntu 10.04.1 partition developed a serious case of USB problems after this morning's kernel update. When I rebooted to try to reset the USB devices, the partition table nuked itself.

So I'm reinstalling WinXP. This is NOT how I planned to spend my day!

Needless to say, I am NOT a happy camper...

Canada

Journal Journal: My new career path. 24

More here.

As a bonus, I'll probably soon reveal the unbelievable story of how I acquired my legal knowledge - by doing something nobody else ever has, and which, until now, would be considered pretty much impossible.

I'd rather not, because there is some danger involved, but it's necessary to achieve my goals in an open and transparent fashion.

Advice and help sought and welcome.

Open Source

Journal Journal: Yet another open source failure 14

Trying to print an envelope address in openoffice under linux? What a waste of time.

Do the people who code this sh*t actually ever use it? Or do they never use anything else, so they simply don't know that it's possible to do better?

Easy prediction - open source will never be competitive. When it's so bad that I'm tempted to throw a copy of XP (or even Win95) on the box because linux on the desktop is still 2 decades behind the times anyway, there's a fundamental problem that obviously will never be fixed.

I really hate them, but my next computer is going to be a mac.

The Internet

Journal Journal: Every browser is *still* broken. 17

After 15 years, we still don't have an un-b0rked browser. CSS 2 was finalized back in 1998, and yet firefox, opera, chrome, arora - they all render differently for non-trivial layouts.

15 years, and they still can't get the basics right. It means that the problem is not the implementation, but the underlying concepts that are flawed in fundamental ways.

And there's no blaming Microsoft or Apple for this fiasco.

No, we did this to ourselves. We're all suckers. The people setting the standards did it wrong, and we didn't immediately stone them to death, salt their fields, enslave their families for the next 3 generations, and all that other "Carthage must die!" goodness.

So we have let ourselves become slaves to stupidity.

What a waste of time, energy, brain cells, and just general aggravation. Have fun with html5 + css3, folks - you'll never see it finished in your lifetime, not even if you live for another 100 years.

Apple has it right - apps, not a stupid one-size-fits-nada web browser. Just like they have it right about not releasing stuff until it's good and ready.

Stupid browsers. Stupid us.

Programming

Journal Journal: NoSQL+ sprintf() == better. 7

Old technology doesn't die - it gets re-implemented when newer ways get too bloated and turn everything they touch into Beavis and Butthead.

In the dying days of the last century (awk! - how time flies) I used to do web cgi in c, same as a lot of people: malloc() and sprintf() to insert variables into a "template", then printf() to output the result. It was easy to track memory allocation for such cases, so the whole "OMG you'll leak memory" issue was a non-starter.

And then along came the attack of the killer web scripting "pee" languages - php, perl, and to a lesser extent, python. The concept of a "templating language" evolved and eventually we ended up with "templating engines" - megabytes of code to make up for the shortfalls of the approach.

For example, output buffering. php includes stuff like ob_start() because even one stray newline emitted will prevent you from setting cookies on the client. c/c++ cgi programs didn't worry about a stray newline being output by an #include file because only printf() and putchar() would actually write stuff to stdout - so as long as you were just sprintf()ing to your format strings you were all good. In php, even one space before the opening tag or after the closing tag in index.php and you're hosed for sending cookies (which is why you should always omit the closing tag - the spec allows it).

Another advantage was that the ONLY character you needed to escape in any file you loaded as a template (i.e. as a sprintf format string) was the % symbol. No worrying about single or double quotes, angle brackets, or whatever.

For user input, the only sanitization needed was escaping the left and right angle brackets (to prevent someone from entering raw html, such as script tags) and, again, the % symbol. No "escape_string", no "real_escape_string", no "really_really_escape_string", since the data was stored and read w/o needing sql.

In terms of performance and memory use, sprintf() easily beats regexes. You really can't help but notice the difference. And it sure beats the so-called "compiled templates" produced by templating engines like smarty.

Yet another advantage is portability - any language that supports sprintf() can be used w/o modifying your template files. This means that if you need the best possible performance on some really really HUGE files, you can always do it directly from a shell in c, or if you're so inclined, java.

So I decided to re-implement my old approach from scratch yesterday, in a couple of hours, in php. The entire code - variable range-checking, reading and writing data (strings and arrays), meta tag files, html, reading and parsing config files, getting and setting cookies, handling posts and gets with verification, sane defaults and coercion of the values to those default types, loading templates, creating those little "go to page 1 2 3 4" clickies for larger web documents, and everything else - is under 9k, including the site's index.php file.

THAT is a lot more maintainable than the 1.1 meg download for smarty templates (and smarty doesn't do the reading and type coercion from the client or the minmax range checking or some of the other stuff).

So, 130+ files for smarty, or 2 for the old way (and one is index.php, so it really doesn't count ...)? Oh, and the template files look a LOT cleaner. For example, there's no embedded program logic like {include file='whatever'} in the templates, so stuff like

<input name="first_name" value=$smarty.get.first_name> // no default values!!!
<input name="last_name" value=$smarty.get.last_name> // no type coercion!!!
<input name="address" value=$smarty.get.address>
<input name="city" value=$smarty.get.city>
<input type="submit" value="Save">
<input type="reset">

becomes:

<input name="%s">, etc ...

... so your template looks like this instead:

<input name="first_name" value="%s">
<input name="last_name" value="%s">
<input name="address" value="%s">
<input name="city" value="%s">
<input name="age" value="%s">
<input type="submit" value="Save">
<input type="reset">

and your index.php file looks like

<?php
$BASE = '../'; // all files live outside of public_html space
include "$BASE/php/libfoo.php";

$HTML = read_tpl("test_page"); // read_tpl automatically prepends "$BASE/tpl/", appends ".tpl" extension.

$css="my_skin_2";
$js = "new_js_lib";

$head = read_tpl("head");
$meta = read_meta("test_metadata");
$desc = $meta[0];
$keywords = $meta[1];

// want to test a new skin, new javascript libs
$HEAD = sprintf($head, $desc, $keywords, $css, $js);

$form = read_tpl("junk");
// get, post, cookie, gpc_pg, etc all sanitize the %, < and > symbols.
// also use an optional default value, and coerce any entered data to that type,
// so, if you ask for an integer and specify -42 as the default, anyone entering "FOO" returns -42
$first_name = get('first_name', 'Enter first name here');
$last_name = get('last_name', 'Enter last name here');
$address = get('address', 'Enter address here');
$city = get('city', 'Enter city here');
$age = get('age', -1);

// do any additional validation, data manipulation, etc.
// no need to do output buffering ... it's all in memory until you do the next line.
$FORM = sprintf($form, $first_name, $last_name, $address, $city, $age);

$footer = read_tpl("footer");
$FOOTER = sprintf($footer, "have a nice day!");

//okay, now write the whole thing
printf($HTML, $HEAD, $FORM, $FOOTER);

There is zero programming logic in the template itself - and that's the way it should be. Template engines like smarty fail in the "presentation should be separate from code" department.

Plus, since most templates won't include variable names, they're pretty generic, again promoting template re-use. The footer, for example, could contain the output of several other templates instead of a simple message, and you'd never touch the main page template OR the footer template.

User Journal

Journal Journal: Thoughts on the entangled-quantum future

In the future, and a not too distant future at that, we will have quantum-entangled computers that work alongside or as add-ons to our existing computers.

Entangled quantum processors are good at the very class of computing problem that traditional CPUs suck at. And the reverse is also true, so we won't all be switching to quantum computers, we'll be merging the two technologies into a single box capable of tackling both classes of computing problem efficiently.

The issue for society is that current encryption technologies rely on the difficulty of calculations of precisely the type that quantum computers are good at. In the quantum era, it will be effectively impossible to encrypt data in a secure fashion. If you vary your keys fast enough, you might be able to maintain some semblance of security for a specific communications link to another node on the internet, but that would be about it.

That means that all the information on all the centralized data servers running behind every major business or website on the internet is readable.

I realized this years ago. It's one of the reasons I post publicly -- because I know the futility of trying to conceal or limit the access to what I post on the internet.

And it will happen in my lifetime, of that I have no doubt.

I contend that the only way to secure personal data in that future is to have personal servers located at your own home, with maintenance scripted so thoroughly that all the user has to do is pop in a backup cartridge each evening to receive the daily incrementals and weekly full backups of their life.

Instead of you entering your information into a shared server somewhere, you would grant that shared server's processing systems read-only access to the relevant parts of your information, identified by some sort of unique id code/string (maybe even just a UUID) and the specific IPv6 address of the single host that is being granted that read permission.

Just for safety's sake, every time the application server read your personal information, an access entry would be logged.

It would be forbidden for any application server to retain the data. The sole source of your personal information would be your home node itself.
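
As a toy sketch of what that gatekeeping could look like on the home node (everything here is invented for illustration): a read grant is keyed on the application server's UUID plus the one IPv6 address allowed to use it, and every read is logged, in Java:

import java.net.InetAddress;
import java.time.Instant;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class HomeNodeGrants {
    record Grant(UUID serverId, InetAddress allowedHost, String dataSection) {}

    private final Map<UUID, Grant> grants = new ConcurrentHashMap<>();

    // Grant one application server, at one specific address, read access to one section.
    public void grantReadAccess(UUID serverId, InetAddress allowedHost, String dataSection) {
        grants.put(serverId, new Grant(serverId, allowedHost, dataSection));
    }

    // Serve a read only if the UUID, source address and requested section all match a grant,
    // and log the access.
    public String read(UUID serverId, InetAddress requestSource, String dataSection) {
        Grant grant = grants.get(serverId);
        if (grant == null
                || !grant.allowedHost().equals(requestSource)
                || !grant.dataSection().equals(dataSection)) {
            throw new SecurityException("No matching grant");
        }
        System.out.printf("%s read of '%s' by %s from %s%n",
                Instant.now(), dataSection, serverId, requestSource.getHostAddress());
        return fetchSection(dataSection); // lookup in the node's local store (not shown)
    }

    private String fetchSection(String dataSection) {
        return "..."; // placeholder for the actual personal data
    }
}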

Sure, some might choose to contract the hosting of that node out to something akin to an ISP or a Google or a Microsoft, or even an IBM node in a data center/cluster somewhere, but the key point is that the IPv6 address of each and every individual's information be assigned to one particular node.

I can not imagine any other way of protecting your personal data in the quantum future.

And that's the future I'm building towards.

Your node would assign each application server a corresponding signature: the UUID. Unique id generators are basic, simple, effective, and have been in production for a long time - but a UUID is hardly anything akin to a password.

Maybe you'd want to look into how the data center at the host is physically architected to protect the token.

Just remember that with quantum capabilities, passwords will be easily cracked and stolen by anyone with access to a backbone link that can have a good old-fashioned network sniffer attached. You're really relying on the request coming from that particular IPv6 address with the assigned UUID as the unique signature of the authorized request.

Implementing such a system means implementing common data structure standards across all platforms and all systems in due time. You'd choose your hardware/node provider based on your faith in the quality of the system they deliver as a whole.

So you could buy an IBM stack, an Oracle stack, a Microsoft Windows stack, an Apple stack, or any one of the many Linux and BSD stacks.

Or even smartphone and tablet OS stacks.

Similarly, you'd choose your database service provider from the supported RDBMS vendors, your file system, and so on. Some stack vendors don't let you choose some options, but that's part of what you get when you buy into their stack.
