Resources for Programming Course TA?

cndrr asks: "I'm a Teacher's Assistant for intro to Java at my university again this fall. The last time I taught, all the TAs had students turn in their assignments through email. I'm thinking about scripting a site that will let students turn in their programs automatically and in some cases, run the program and (based on the output) automatically grade it. Has anyone else TAed and found a good solution that they would recommend?"
This discussion has been archived. No new comments can be posted.

  • Don't do it. (Score:4, Insightful)

    by Andrew Sterian ( 182 ) on Saturday July 22, 2006 @04:41PM (#15763913) Homepage
    You'll waste as much time setting up and tweaking the system as you will doing it manually. Automatic submission and sorting into folders by course section is simple enough, but running the program and automatically grading the output??? That's madness.

    Besides, trying to distance yourself from your students as much as possible by using technology is the exact opposite of what teaching is supposed to be about. If the students know that a real human will be reading the output and providing constructive feedback, they're much more likely to take it seriously.
    • It's also a security nightmare: you'd be compiling and running unknown code. Not a good idea.
      • Re:Don't do it. (Score:3, Insightful)

        by wed128 ( 722152 )
        If he were to implement a system like this, I'm sure he'd use a secure sandbox. When teaching programming courses, anything more complex than "hello world" should be run as part of the grading process.
        • Considering that this is a Java course we're talking about, setting up a sandbox is easy and effective; use a policy file with only the permissions the student's code needs (for simple programs that don't need file I/O, a blank policy file could even be enough). The result should be roughly as safe as running Java applets on the net, and for the same reasons.
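For reference, a do-nothing policy plus the right command-line flags is about all it takes; the file and class names below are made up, and note the double `==`, which makes this policy *replace* the default one rather than add to it:

```
// student.policy -- grant block left empty: no file I/O, no sockets
grant {
};
```

```
java -Djava.security.manager -Djava.security.policy==student.policy StudentMain < test1.in
```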
      • Re:Don't do it. (Score:4, Insightful)

        by feed_me_cereal ( 452042 ) on Saturday July 22, 2006 @05:52PM (#15764122)
        If any student taking this class is good enough to circumvent the security for grading, they don't need to be taking the class and certainly don't need to worry about their grade.

        If any student taking the class actually tried something like this, they would be severely busted as all the evidence would be right there in their code (so long as the system is like ours, where submissions are handled and logged externally from where they're tested).
        • Yeah, because plagiarizing another student's code and changing the names of the variables requires great genius.
          • Especially when students change "left" to "l" and "this" to "that". Many of them never bother to compile afterwards.
          • That doesn't really have anything to do with circumventing security to change your grade, though. Actually, the automated graders make it *very* easy to catch cheaters this way. I always look over each sheet by hand, and if the grading script comes up with the same errors (or no errors) on two different submissions, I check them against each other. I bust 2-3 people a semester this way.
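The failure signature a grading script produces doubles as a cheap plagiarism tripwire. As a sketch (the student names and result strings here are invented), grouping submissions by identical script output might look like:

```python
from collections import defaultdict

def flag_matching_results(results):
    """Group submissions whose grading script produced byte-identical
    output; any group of two or more deserves a side-by-side human look."""
    by_output = defaultdict(list)
    for student, output in results.items():
        by_output[output].append(student)
    return sorted(group for group in by_output.values() if len(group) > 1)

# Hypothetical script results for three students:
results = {
    "alice": "test3: FAIL (expected 42, got 41)",
    "bob":   "all tests passed",
    "carol": "test3: FAIL (expected 42, got 41)",
}
print(flag_matching_results(results))  # [['alice', 'carol']]
```

Identical passes are boring, but identical *failures* on the same test are exactly the coincidence worth checking by hand.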
    • Re:Don't do it. (Score:4, Insightful)

      by Anonymous Coward on Saturday July 22, 2006 @05:15PM (#15764027)
      You'll waste as much time setting up and tweaking the system as you will doing it manually.

      It really depends on the setup at his school. When I was a TA, we had one "head TA" who would take care of making automatic grading tools, and as a result the rest of us probably spent a lot less time doing tedious grading than we would have otherwise.

      Besides, trying to distance yourself from your students as much as possible by using technology is the exact opposite of what teaching is supposed to be about.

      Agreed, but when grading programming assignments, part of the grade comes from being able to produce the correct output, and a good automated grading tool can make sure that students are graded consistently on that. Sure, you should still grade on style and give feedback and all that, but when strictly grading the output, who really cares if it was graded by a machine?
      • On top of that, in my experience, we don't normally chillax _with_ the students while grading.
      • The problem with strictly grading the output, though, is that sometimes you end up really screwing students for no reason. We had one class that tried this; essentially it would just hit it with some input and then see if the output was correct. You had 10 chances to submit, and each one that got rejected was 10% off. I can remember two very distinct problems with the system, the first being the TA's lack of spelling ability and the second being ridiculously strict guidelines on the output, some of which was
    • Re:Don't do it. (Score:5, Insightful)

      by kbielefe ( 606566 ) on Saturday July 22, 2006 @05:30PM (#15764072)
      It's not madness. I actually preferred my classes with automated output checking, for the following reasons:
      • It forces the teacher to make very clear program specifications.
      • It forces the teacher to make a reference implementation of their own assignment, so they will see for themselves where potential problems are.
      • Teachers provide some sample test cases and solutions, so that students can get instant feedback on the correctness of the program, even in the middle of the night. The final grading used those test cases, and some surprise ones to make sure the students thought about the entire algorithm and didn't just write to the test cases.
      • The automated grading only ever counted for maybe 70% of the grade. I never had a class with automated grading where a human didn't check for style, implementation, simple mistakes, etc.
      • The grader can still look at the output of failed test cases to provide constructive feedback, but they don't have to waste their time looking at it when the student got it right.
      • I always knew exactly what a certain percentage of my grade would be, before I turned in the assignment.
      As for implementation, all you need is a script that compiles the student's program, runs it against the test cases, then does a diff against the output from the reference implementation, and records how many test cases passed.
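A minimal version of that script could look like the following, in Python for brevity. The file layout and class name are assumptions: each `testN.in` sits next to a `testN.expected` produced by the reference implementation, and the student's class has already been compiled so it can be launched as `java StudentMain`:

```python
import subprocess
from pathlib import Path

def run_case(cmd, infile, timeout=10):
    """Run one test case; return stdout, or None on crash/timeout."""
    try:
        with open(infile) as f:
            proc = subprocess.run(cmd, stdin=f, capture_output=True,
                                  text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return None  # infinite loop: count the case as failed
    return proc.stdout if proc.returncode == 0 else None

def grade(cmd, test_dir):
    """Diff the program's output for each *.in against the reference
    implementation's saved *.expected; return (passed, total)."""
    passed = total = 0
    for case in sorted(Path(test_dir).glob("*.in")):
        total += 1
        expected = case.with_suffix(".expected").read_text()
        if run_case(cmd, case) == expected:
            passed += 1
    return passed, total

# e.g. grade(["java", "StudentMain"], "tests/")
```

The timeout is the one non-obvious part: without it, a single accidental infinite loop hangs the whole grading run.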
      • That's great and all, but wait until you're taking a compilers course and are subject to the restriction that your generated code be precisely identical to some reference code, because grading was automatic.

        I'd say while taking that course, I spent 20% of my programming time on making the output CORRECT. The remaining 80% of the time was spent making it EXACTLY MATCH the reference. What a pointless waste of time.

        And to anybody about to suggest that I gained experience "programming to an exact spec," wel

        • I agree that automated checking would be pointless in the case where there is more than one correct answer, unless the grader checks for all possible correct answers. I had an artificial intelligence class with automated grading that worked that way.

          Ironically, both my undergrad and graduate compiler courses had very little programming. Doing compiler optimizations by hand for a whole semester can really make you appreciate compilers.

    • by feed_me_cereal ( 452042 ) on Saturday July 22, 2006 @05:43PM (#15764105)
      For you and all the other people saying human eyes are better than a computer for grading:

      They don't need to be a substitute, and they can help a great deal in doing a lot of the manual checking you would be doing anyway, as well as the organizational part. I'm a TA, and I use a script to check for just about every simple mistake I can think of, and then I go over every assignment by hand, with a printout of my test script's results. The script doesn't so much grade the work as point out any output that might not be exactly what I expect, over an assortment of tests. The students often comment that these scripts, which I hand back with the assignments along with their output, help a lot to identify not only what their problems were, but what sorts of things they should have in mind when writing their programs. I then *carefully* go through their code by hand to ensure they were using good style and didn't coincidentally happen to pass a test with code that isn't technically correct.

      When you have over 60 students in a class and have to grade long programming assignments every week, these scripts are essential to getting my work done in a timely manner. My personal attention and comments are not replaced by my scripts, but are enhanced by my scripts. My time spent grading is made *more* effective.

      Also (for cndrr): I might be able to provide the main part of the script we use; I'd just have to check with my instructor. The test scripts we use are fairly easy to write with it, if you don't mind doing it partially in Scheme... (our department is in love with that language). Let me know if you're interested.
      • by iMaple ( 769378 ) on Saturday July 22, 2006 @06:05PM (#15764152)
        I was a TA for a programming course and we had managed to get a fairly automated system running very smoothly. It was a huge class with 7+ TAs, and the submission script automatically allotted each TA his/her share (randomly, to ensure fairness). The automated test script required TA intervention for each student program: it compiled and ran the program, compared it to the standard output, and displayed the output with a fail/pass result. If the test failed, it opened the source files for the TA to review. And finally the TA would have a large list of errors to choose from (with optional comments), and predefined penalties. The final score was just written to a CSV text file (one for each TA).

        This cut down the time required by almost 80%. The correct programs were easy to grade (automatically) and most programs with 'standard' (expected) errors did not take too long either. Once in a while someone would have weird errors and the TAs would have a small challenge finding those. It was the fairest system I could think of and took care of most of the drudgery without being unfair (the only important caveat was to make sure that the test cases for the tester script were solid and the students couldn't cheat their way out with predefined responses).
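The random-allotment part is easy to sketch. Assuming each submission is just a directory name (the names below are illustrative), something like this deals them out evenly:

```python
import random

def allot(submissions, tas, seed=None):
    """Shuffle the submissions, then deal them round-robin to the TAs,
    so shares are random but differ in size by at most one."""
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible
    shuffled = list(submissions)
    rng.shuffle(shuffled)
    shares = {ta: [] for ta in tas}
    for i, sub in enumerate(shuffled):
        shares[tas[i % len(tas)]].append(sub)
    return shares

# e.g. allot(["sub01", "sub02", "sub03"], ["ta_a", "ta_b"])
```

Each TA's scores can then be appended to their own CSV file, as in the system described above.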
        • 7 TAs == huge class? Ha! Our CS2 class sometimes had 800 students (across all sections) with about 40 TAs. We also did all the stuff people are discussing: automatic instant feedback, a security manager, trapping I/O and stuff like System.exit(). For non-instant-feedback assignments, we had a graphical UI for assigning grades and adding comments for parts of the assignment based on test results run in the same VM. It would then email grade reports to the students and output codes to be copied and pasted
    • Funny Story (Score:2, Funny)

      by Cassini2 ( 956052 )

      We had a T.A. automatically grade assignments at the school where I worked. He wrote an automatic scanner to count the number of words in the comments, and to automatically run the programs and compare the output. He then assigned grades appropriately, computing the grade only on the number of words in the comments.

      He then quite proudly stood up in front of the class and explained to them what he had done. He had no idea what was about to happen. Essentially, everyone in the class simultaneously work

    • I see a lot of people whining about automatic grading or how this guy is trying to avoid work. It doesn't sound like he's trying to avoid work at all! An automated testing script can be very beneficial to the students because it defines the assignment very strictly and saves them the time and effort of re-writing the front end of their program, letting them focus on the real core concepts. Besides, with the huge enrollments that intro courses tend to have, the automated system will save him time organizing and, hopefull
    • Re:Don't do it. (Score:3, Informative)

      by CliffEmAll ( 794568 )
      As we try to teach our students, use the best tool for the job. In my opinion, that means only using automated testing when necessary. I am teaching a course (Programming in C & the UNIX Environment) for the first time this summer. (I was fortunate enough to be granted a research assistantship when I was accepted to grad school, so I was never a grader.) I was offered the use of some automatic collection and grading scripts developed by someone in the department decades ago, but I decided to avoid them.
  • Do the work. (Score:4, Interesting)

    by mukund ( 163654 ) on Saturday July 22, 2006 @04:45PM (#15763923) Homepage
    I'd rather you manually grade it and provide valuable remarks to your students about their programs.

    If universities were all about automated stuff, students could very well learn by themselves from course textbooks such as those prescribed. They go to university so that they can interact with their professors, get their amateur work evaluated properly to shape their future work, and collaborate with their classmates.

    • Well, quite. Have a machine grade the programs, and you might as well have a machine write them.

      (Of course, you'd then need to write a program to write the programs...)

    • Thanks for reminding me of that. Saw it a while back when I simply didn't have the time, and quickly forgot. Good stuff! :)
    • When I was a TA, one of the things that seemed to make the most difference for students was helping them find the problems in their code on their own. Few of them ever came to talk to me about any comments I wrote on their submissions.

      If you expose your automated grading system so that anyone can run it (at least with some subset of the standard test data and expected results) you let every student check their grade as they go. You also make it more like the real world.... run your program against several d
  • Automatic grading apps are, uh, somewhat prone to breakage. I know of at least one university that used them and found that certain students just wrote programs that did sneaky tricks rather than the assignment at hand (fork off a child process that sleeps for a bit then overwrites the grade file, then just output whatever). Perhaps more effort to do a one-off program that way, but far less in the long term.

    Said students got to keep the grades, too...
    • Automatic grading apps are, uh, somewhat prone to breakage.

      Programs with bugs. What a fscking novel revelation!

      (fork off a child process that sleeps for a bit then overwrites the grade file, then just output whatever)

      What that means is that their security process sucks eggs.

    • fools!

      One of the benefits of Java: security! Not only could they not do malicious things, we had defenses against DoS: a time limit and non-instant execution. Instead, submissions were queued up and executed one at a time. Also, if you refreshed the status web page faster than was automatic, you got demoted in the queue. :) Typical wait time: 10 minutes or something. People submit way too often if there's no wait time. You need to get them to do a little thinking before they just tweak stuff aimlessly
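That demotion trick is simple to mechanize. A sketch (Python; the class, method names, and the 60-second threshold are all invented) of a one-at-a-time queue that pushes impatient pollers back one slot:

```python
import time

class GradingQueue:
    """Submissions run one at a time, front of the list first; checking
    your status sooner than min_poll seconds after your last check
    costs you one place in line."""

    def __init__(self, min_poll=60):
        self.queue = []       # submission ids, front runs next
        self.last_poll = {}   # submission id -> time of last status check
        self.min_poll = min_poll

    def submit(self, sid):
        self.queue.append(sid)

    def check_status(self, sid, now=None):
        """Return the submission's current position (0 = next to run)."""
        now = time.time() if now is None else now
        pos = self.queue.index(sid)
        too_eager = sid in self.last_poll and now - self.last_poll[sid] < self.min_poll
        self.last_poll[sid] = now
        if too_eager and pos + 1 < len(self.queue):
            # demote one slot: swap with the submission behind us
            self.queue[pos], self.queue[pos + 1] = self.queue[pos + 1], self.queue[pos]
            pos += 1
        return pos
```

The cap at the back of the queue keeps the penalty gentle: an eager refresher loses at most one place per poll and never falls out of the queue entirely.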
  • Umm... (Score:3, Insightful)

    by Angst Badger ( 8636 ) on Saturday July 22, 2006 @04:49PM (#15763931)
    Are you sure it's a good idea to let students execute arbitrary code on your unattended machine?

    I mean, I know *I* would get ideas...
    • Exactly, that was my first thought when reading the question. Telling kids "log on to my box and run whatever code you came up with" sounds like a recipe for disaster, and a potentially huge liability:

      student's mom: "why doesn't johny have a grade for your class?

      you: "well, um, i had this idea about assignments, uh, and now i don't have anybody's work

      mom: "lawyer!!! i want a lawyer!!!

      just read through it, seriously, it'll be much easier than unbreaking whatever the kids wind up doing to you

    • If you would "get ideas", then you won't be taking an introductory Java course. Besides, it would be arbitrary Java code, which makes it harder; and besides all of this, if the submission system was set up right, you couldn't help but be on record for your attempt, and would then be busted severely.
    • Re:Umm... (Score:1, Insightful)

      by Anonymous Coward
      I've TA'd 3 Intro Java courses that use automated grading and I've never had anything bad happen by running the code. In fact, I've never ever heard of that happening at my school with anyone. It's not very common that kids in an introductory programming course have the know-how to destroy your computer when they can't even get a for-loop working the way they want.
    • Re:Umm... (Score:2, Interesting)

      by kuperman ( 7726 )

      Are you sure it's a good idea to let students execute arbitrary code on your unattended machine?

      From a security standpoint, that generally would be a bad thing to do. However, there are a few simple things that can be done to minimize the problem. In addition to the sandboxing mentioned elsewhere, you can create a normal user account that is the "grader" account. The TA/Instructor copies the source into that account and then can run it and will have no more privs than the student did -- so any badness

  • RoboTA. (Score:1, Funny)

    by Anonymous Coward
    "I'm thinking about scripting a site that will let students turn in their programs automatically and in some cases, run the program and (based on the output) automatically grade it. "

    Grade what? If they can automate it, then you're out of a job.
  • I had a class last fall in which the prof used an automatic grader. He had a couple-page document describing all the nit-picky formatting requirements we had to follow so that his automatic grader could run it. And if we missed even a single requirement, we got docked for it (generally in multiples of 10% of the assignment). Many people scored below-average to failing on their first couple of assignments because of this system, even though their code was flawless. So, please, please don't use automatic graders. They
    • Formatting requirements are *not* nitpicky. Wait till you graduate and have to read and extend your coworker's code. Then you will understand why formatting standards are used.

      Students always complain about this stuff, but the bottom line is, if you're not comfortable paying attention to details, you should consider sociology or something.
      • Having formatting requirements is one thing; punishing students equally or worse for (for example) indenting incorrectly and writing broken code is another. In my opinion, errors that can be fixed by a well-known script or program (e.g. indent) with no harmful effect on the functioning of a program should under no circumstances be conflated with anything that leads to incorrect behaviour of the program. Both my superiors and my students agree with this sentiment.

        The last thing we need in the software busine
        • > In my opinion, errors that can be fixed by a well-known script or program (e.g. indent)

          Then why didn't you just run the script on your badly formatted code before you handed it in? Sounds to me like you lost a couple of points and are trying to blame the grading scheme after the fact.

            • You're missing the point entirely. The grading is supposed to evaluate a student's skills and provide feedback for his development, not evaluate whether he can follow an arbitrary set of instructions, mostly subjective in origin to begin with, that could be performed by a reasonably simple program.

            As far as I can remember, I've only had one programming course where whitespace conventions were enforced and, even then, the effect on the grade was much less than the penalties mentioned here. Over here in
        • Having formatting requirements is one thing; punishing students equally or worse for (for example) indenting incorrectly and writing broken code is another.

          That's why you go through and apply weights to the various things you're scoring. "Consistent indentation" might be worth, oh, 3 points ( 3 = consistent indentation, 2 = minor lapses, 1 = made an effort, 0 = randomness ), while "accomplishes the task" might be worth 30 points.
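That weighting is trivial to put in a script. A sketch (Python; the criteria and point values are just the ones floated above, not anyone's official rubric):

```python
def rubric_score(marks, weights):
    """Combine per-criterion marks (each on its own scale) into one
    weighted total, so a formatting lapse can never outweigh broken code."""
    total = 0.0
    for criterion, (earned, out_of) in marks.items():
        total += weights[criterion] * (earned / out_of)
    return total

weights = {"accomplishes the task": 30, "consistent indentation": 3}
marks = {
    "accomplishes the task": (30, 30),  # works perfectly
    "consistent indentation": (1, 3),   # "made an effort"
}
print(rubric_score(marks, weights))
```

With those weights, flawless code with sloppy indentation still scores 31 of a possible 33, which is the proportionality being argued for here.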

          In my opinion, errors that can be fixed by a well-known script or pr

          • There seems to be some misunderstanding here.

            That's why you go through and apply weights to the various things you're scoring. "Consistent indentation" might be worth, oh, 3 points ( 3 = consistent indentation, 2 = minor lapses, 1 = made an effort, 0 = randomness ), while "accomplishes the task" might be worth 30 points.

            ... Which is exactly the point I was trying to make (and pretty much the way I grade, too).

            Finally, it's in the grader's best interest to look at readable -- read, well-formatted -- code. M

    • I disagree with your evaluation of the "importance" of following a specification. Specifications are often crucial to making programs interoperate, and if the specification says you can't have blank space at the beginning of the first line following such-and-such declaration, then having blank space there really does constitute an error in your program.

      Automatic graders are better for both students and teachers, because the teacher/TA can focus on the essence of your program, and let the machine worry abou
  • Suggestion ... (Score:4, Insightful)

    by Monkelectric ( 546685 ) on Saturday July 22, 2006 @04:51PM (#15763941)
    How about doing your job instead? Automated turn in is fine, automated grading is bullshit and a perversion of academic principles.
    • Re:Suggestion ... (Score:4, Interesting)

      by Dachannien ( 617929 ) on Saturday July 22, 2006 @05:10PM (#15764011)
      If part of the assignment requirement is that, upon being run, your program should produce a specific output, I don't see where there's a difference between having a human run the program and determine whether the output is correct, and having a script do the same thing.

      Sadly, automating any part of the grading process means you end up giving a lot less feedback to the students concerning their errors, but that's the prerogative of the instructor and assistants. In some classes, however, the student-to-TA ratio is so large that it would be impossible to grade every homework manually.

      Also keep in mind that some TAs don't actually get paid for their efforts - and those who do usually make peanuts. At some schools, and in some academic programs, being a TA is a required part of your graduate degree. And since they're working on their own degree, the TAs actually have a lot of their own work to do in addition to grading hundreds of homeworks every week.
      • Re:Suggestion ... (Score:3, Interesting)

        by Novus ( 182265 )
        Fully automatic grading breaks down quite badly if there is a possibility of students making small mistakes that cause large amounts of tests to fail (or, conversely, big mistakes that cause few tests to fail); in this sort of scenario (and concurrent programming is one of them), you really want a human to assign the final grade by identifying the underlying mistake/bug instead of the symptom/failure. However, having automated tests that are designed to expose common problems is a good thing.

        Of course, if y
        • Fully automatic grading breaks down quite badly if there is a possibility of students making small mistakes that cause large amounts of tests to fail (or, conversely, big mistakes that cause few tests to fail);
          It becomes less of a problem if you let students use the grading system. Write test cases and send them to students, they'll know in advance how many of them pass and will be able to modify their program appropriately.
          • Giving the students some of the tests does decrease the chances of irrelevant errors; however, you run the risk of students writing programs to fit the tests instead of the specs; in other words, their mistakes are going to be ones that your tests don't find. At a minimum, I'd keep the more complex tests to myself.
        • Exactly. And it's also how students graduate writing all their programs as one function. I've seen perfectly "correct" programs that were 3000 lines of if statements that could have been accomplished with 200 lines of code.

          At my place of business, we just hired a junior engineer (over my strenuous objections) who just graduated from college and asks questions like "How can I compare variables?" ... I have to think that automated grading is part of the problem that creates people like that :)

    • I always made the grades up off the top of my head at the end of the semester, and found it much more fitting with academic tradition and principles.

      hehe, jk.
    • The OP's "job" is to educate students, not to work hard doing repetitive busywork. Automating repetitive work is working smart, not hard. Now the TAs should take the time saved and spend it working with students one-on-one.
  • no-no (Score:1, Insightful)

    by Anonymous Coward
    Automatic grading is unethical.
  • So... (Score:2, Insightful)

    by Anonymous Coward
    Let me get this straight. Professors no longer teach most classes, they're too important to teach something so basic I suppose, and have dumped that task on TAs. You, a TA, are now too lazy/unwilling to do this as well and instead are trying to farm it out to an automated process?

    • Laziness? Automation isn't laziness. There is no virtue in spending hours doing something that can easily be automated, and it's a shameful waste if the time could be used more productively. In this case, time spent verifying inputs and outputs could be better spent reading and critiquing student code, meeting with students to answer their questions, or maybe just not sitting around scanning over reams of student program output, making sure the right answers are hidden somewhere in there.

      Now if he'd aske
  • We have an automated submission system that requires the program to compile for submission. It also has the capacity to run the program and check its output (command-line stuff only). It also ensures that the work is turned in on time and that it contains all of the files required (unless of course we are given an assignment giving us free rein on the class structure, etc.). Though, the TA does go through all of the code leaving comments where appropriate, and those comments have been a massive help. Over
  • All my classes do it (Score:3, Interesting)

    by xWeston ( 577162 ) on Saturday July 22, 2006 @05:01PM (#15763981)
    Have any of the previous posters actually attended a large university in the computer science program?

    Every class I've taken at UCSD does some sort of automatic grading on the programming assignments. It would be impossible to grade everything otherwise... Last quarter we wrote a compiler that ended up being a few hundred KB of source. There were over 200 test cases run by the autograder.

    All of the assignments are turned in from a unix prompt using a TURNIN command after prepping for the appropriate class.

    The class I'm working on now (Operating Systems, using NACHOS) even has autograder() methods in the skeleton of the code that are used during grading...

    I agree for a first class some feedback and hand grading might be necessary, but even with autograding you can add comments after looking at the code that causes similar test cases to fail.
    • "Last quarter we wrote a compiler that ended up being a few hundred KB of source."

      A few hundred KB of CUP/lex autogenerated source... c'mon, don't let people think we're reinventing the wheel over at UCSD. =P
  • Does your university have a subscription to one of those software scripts, such as WebCT? It's usually used to post lecture notes, assignments, and provides a bulletin board for questions. It also has a submission section (for WebCT) that allows students to submit their assignments by a certain time. Then you can download them all, each file separated into student accounts, and compile each one and grade.
    • You're either paid to shill for WebCT, or you must just really, really hate the guy who wrote the original question. As in "he killed my family and I'm out for revenge" hate.

      Why not just hunt the guy down, blind him, chop off a few extremities, and leave him in a ditch? It would be a kindness in comparison to WebCT.

      I haven't had the best experiences with their software.
  • Some amount of automation is helpful to avoid being swamped, but I don't think that a fully automated process can do the job. I require my students to generate an executable jar for submission, but also provide unit tests to them for the early assignments. Stage one of grading is to run a script which runs each of the jar files against the unit test. I then print the programs and mark them up for formatting and style, paying particular attention to those who ran afoul of the unit testing stage. Later in
  • by Vengie ( 533896 ) on Saturday July 22, 2006 @05:45PM (#15764108)
    Is very much automated-submission and grading -- or at least was from 00-04. Some professors do the grading by-hand (i.e. they run scripts by hand) but the majority of classes I took where you turned programs in, there were scripts that graded you that you could run part of yourself. (i.e. there were "public" and "private" scripts that used the same interface.)

    In a large part, it depends on how big your class is, and if you plan on continuing teaching. One professor, (whom I idolize) has been doing this long enough to have set up an extensive and all-inclusive framework. Can't go (too badly) wrong when you roll your own, but YMMV.
  • Are you doing graphical programming? If so, skip this.

    If your students are all writing CGI-style programs, maybe reading and writing a file or two, why not set up a Linux or FreeBSD box for them? Open ftp and ssh for them and require them to build their programs there. Your requirements for each class will be that they compile and run their programs on your server, do a run under script(1), then hand in all source, output, data and log files via a script you wrote called handin assignment [file...], wh

    • And then a kernel exploit will come out and your box will be fucked the day all of the assignments are due for some major project. And poof, things are now pushed back. Oh, and they probably deleted the logs of them doing this as well. And the logs weren't remotely stored, because we're talking about someone new to *nix / BSD.
      • Excellent point! We should also do all our classes via e-meetings because if you put all those people in a room together one of them might have a knife or a gun and start attacking all his classmates, and the attacker will get away with it because we're talking about someone new to hand-to-hand combat.

        Obsessing over the worst possible outcome is usually a waste of time. We're talking about a system where only students have accounts. And it's a beginning programming course, for crying out loud. In Java

  • You are getting paid to mark the papers, to identify problem areas, and to solve those problems for the students; you have to provide feedback to them. If you want to be lazy and skip doing that, please don't ever accept a teaching responsibility. I had to teach several friends how to program in Java since our lecturer was lazy and incompetent. Somewhere, someone will have to take the responsibility. Work out how much those students pay per class and per evaluation and you will quickly realise you are trying
    • How is he supposed to identify your weaknesses and give your code individual attention when he has to spend all his time poking at the binary, making sure it gives the outputs it's supposed to? There's no reason to resent a teacher or TA for trying to reduce the busywork associated with grading dozens of submissions. It's a simple fact of life that almost all of us are trying to do too much with too few resources, and if you can free up that much instruction time by simply formatting your outputs properly, then
  • In COMP 314 [] (Rice University's [] sophomore-level programming & algorithms class, taught this past spring [] by Prof. Dan Wallach []) we TAs solved this problem with Subversion.

    An important part of real-world programming is teamwork; in 314 we embrace this and randomly sort students into groups of two for pair programming. The groups change for each project, and each project group gets a spot in an svn repository [] set up for the course. ACLs keep groups from peeking at one another's changes (for example, see this team list [], which is actually just a slice of our Subversion access control file). Students were required to tag their submissions for each of the project milestones: specification, prototype, final; this was how students submitted code [] for these deadlines. From timestamps we could easily see which groups incurred "slip hours" for late turnins.
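
    A slice of such a Subversion access-control (authz) file might look roughly like this — the repository, path, group and user names below are invented for illustration, not taken from the actual course file:

```
# Hypothetical slice of the course authz file; names are invented.
[groups]
proj2-team01 = alice, bob
proj2-team02 = carol, dave

# Each group can read and write only its own project area.
[comp314:/proj2/team01]
@proj2-team01 = rw

[comp314:/proj2/team02]
@proj2-team02 = rw
```

    With no catch-all `* = r` rule, paths not listed are invisible to the students, which is what keeps teams from peeking at each other's work.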

    There were a number of reported incidents of lost work or conflicting changes that would have been disasters if not for svn saving their bacon. Groups that learned to check in early and often knew that accidental deletions or disk failures posed no threat to a successful project submission. A few enterprising teams even used tags and branches to help organize complex development efforts. All in all, it was quite a successful adventure [] and we'll probably do something similar in the future.

  • Considering it's an intro class, I am sure a lot of students will unintentionally end up with similar programs. []
  • I don't know about everyone else, but the instructors that I've had for programming will read through the code first before compiling and running it.

    There are three reasons for this: 1. To make sure you followed coding standards. 2. To make sure you weren't doing something destructive. 3. To make sure you were actually using the programming constructs that the instructor wanted you to.

    To expand on 3, for simple programs like you run into in an Introduction class, it's possible to do a recursion assignment

    • Of course, it's possible to run the source code or some suitable intermediate form (Java bytecode is very nice for this sort of thing) through a program that checks for illegal constructs (for example, for or do/while loops in your recursion example) and checks for adherence to coding conventions. Writing this program is left as an exercise for the student... I mean, assistant.
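
      As a rough sketch of that exercise (the class name and the keyword-scan approach are mine, not a real course tool — a serious checker would parse the source or inspect bytecode), one could scan the source text for banned loop keywords before running anything:

```java
import java.util.regex.Pattern;

// Hypothetical checker for a recursion assignment: flag submissions that
// use looping constructs. A keyword scan is only a rough first cut.
public class ConstructChecker {
    private static final Pattern LOOP = Pattern.compile("\\b(for|while|do)\\b");

    /** True if the source appears to use a banned loop keyword. */
    public static boolean usesLoops(String source) {
        // Strip comments and string literals first so "for" inside a
        // comment or a string does not trigger a false positive.
        String stripped = source
            .replaceAll("(?s)/\\*.*?\\*/", "")
            .replaceAll("//.*", "")
            .replaceAll("\"(\\\\.|[^\"\\\\])*\"", "\"\"");
        return LOOP.matcher(stripped).find();
    }

    public static void main(String[] args) {
        System.out.println(usesLoops("int f(int n){return n<=1?1:n*f(n-1);}")); // false
        System.out.println(usesLoops("for (int i = 0; i < 3; i++) sum += i;")); // true
    }
}
```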

      As I already mentioned, a good sandbox (OS or VM level) should handle the security aspects.
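
      For illustration, a near-empty policy file can confine a console-only student program — the file name and launch flags below are just one plausible setup, not anything from the original post:

```
// grader.policy -- hypothetical example; launch the submission with:
//   java -Djava.security.manager -Djava.security.policy=grader.policy StudentMain
// An empty grant block means no file, network, or exec permissions.
grant {
    // Add only what the assignment genuinely needs, e.g.:
    // permission java.io.FilePermission "input.txt", "read";
};
```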

      The real reason to read the code is that
  • UC Berkeley Does It (Score:3, Informative)

    by Josuah ( 26407 ) on Saturday July 22, 2006 @07:03PM (#15764337) Homepage
    Berkeley's been doing this sort of thing for years and years now. I'm sure someone there can help you identify software that can do what you're looking for. Unfortunately, I never TA'ed while attending, so I don't know what they use.
  • The Tufts Department of Computer Science uses homegrown software called "provide" (Professor Alva Couch wrote it). It is a simple online grading utility that allows straightforward submission and online grading of electronic assignments in Computer Science and Computer Engineering courses. It is specifically designed to be easy for graduate students to use on their own, to leverage faculty resources and decrease faculty workload in managing grades. In production in EECS, not publicly distr
  • One of the grad students at my school developed this [] system as his masters project...

    Don't know if it's of any help but you might try contacting a Dr. Sun [] since he was the advisor.

    It doesn't automatically grade the homework, but it does let you set time limits, and since it uploads directly to the server, you won't have to deal with students claiming they got the wrong email address.

    The system, currently, is only for homework and old-code retrieval, but many other things are planned for it.
  • How about not being a lazy bastard and doing what you're paid for?
  • TRY (Score:2, Informative)

    by ragnarok ( 6947 )
    At RIT [] we used a program called "try", developed by one of the professors there. You can download it from his page: []
  • I am also a TA for an introductory Java course. I am not sure how many others here are, but I have found auto-marking to be essential. I am the only TA and I have 180 students. Each assignment has 3-4 substantial programs; trying to mark them all by hand would be impossible given time limitations. I do have to agree with others, though, in that feedback is always nice. Therefore, what I have done is mark half of each assignment by hand while the other half is automarked. I have automarking compare output given inpu
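
    A minimal version of that compare-output-given-input step might be sketched like this — the class name, the 30-second timeout and the per-line scoring rule are all my assumptions, not the poster's actual setup:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical automarker step: run the student's program with a canned
// input file, capture stdout to a file, then score against expected output.
public class AutoMarker {

    /** Runs a command with stdin/stdout redirected; returns captured output. */
    static String run(List<String> command, Path input, Path output)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command)
                .redirectInput(input.toFile())
                .redirectOutput(output.toFile())
                .redirectErrorStream(true)   // fold stderr into the capture
                .start();
        if (!p.waitFor(30, TimeUnit.SECONDS)) {
            p.destroyForcibly();             // kill runaway/infinite loops
        }
        return Files.readString(output);
    }

    /** Fraction of expected lines matched in order (trailing blanks ignored). */
    static double score(List<String> expected, List<String> actual) {
        if (expected.isEmpty()) return 1.0;
        int hits = 0;
        for (int i = 0; i < expected.size(); i++) {
            if (i < actual.size()
                    && expected.get(i).stripTrailing().equals(actual.get(i).stripTrailing())) {
                hits++;
            }
        }
        return (double) hits / expected.size();
    }
}
```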

  • If you're TA'ing, you should have the Java-fu to do it. It's not hard. We used one for our classes, and we worked out a really nice compromise.

    When the program was submitted, it was compiled and trivially rejected if it failed to compile. This is worth the price of admission right here, since people whose code "compiles on their machine" get multiple chances to recognize the error of their ways. (Some people think students should get points taken off for this; that's a philosophical discussion I won't g
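
    The compile-and-reject gate itself is only a few lines with the javax.tools API — the class name here is made up, and the grading box needs a JDK rather than a bare JRE, since getSystemJavaCompiler() returns null without one:

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Sketch of a compile-on-submit gate: reject a submission outright if
// javac fails on it.
public class SubmitGate {

    /** True if every given .java file compiles cleanly. */
    public static boolean compiles(String... javaFiles) {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        if (javac == null) {
            throw new IllegalStateException("grading host needs a JDK");
        }
        // run() returns 0 on success; null streams inherit System.in/out/err.
        return javac.run(null, null, null, javaFiles) == 0;
    }
}
```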
  • I don't know what your qualifications are to be a TA for this course. I was a TA for an intro to object-oriented programming course using Java during my junior year. I also don't know what the difficulty level of your course is, but that should not change my opinion. 1. The students in this course will most likely put a lot of time and effort into these assignments. 2. I would hope that YOU could implement a system like the one you described with an equivalent amount of effort. 3. I would hope that your
  • Stanford's CS 106A class is famous not only for being a popular class (one third of all Stanford students take it) but for producing amazing TAs as well. When I took the course a long time ago, not only did the TA grade a printout of your code, but they then scheduled a 15-minute one-on-one session with you to talk about what you did right and how you could improve. This sort of mentoring in the intro programming course prepared students for writing more complex code in other courses and meant that TA resources in later classes could concentrate on the topics at hand and not have to worry about spaghetti code.

    The TAing program became so famous that Silicon Valley companies would recruit people that had been TAs and pay big bucks for someone that had been head TA.
  • 1) At least one student will either be lazy or make repeated mistakes trying to get their submission into the correct format. Because you are the TA and have no power, students will complain to you and demand that you make a special exception for their code. You'll find yourself making multiple variations to handle all the ways people screwed up your basic turn in instructions.

    2) Studying code from other people is, yes, time-consuming, but very valuable for you as a student. You will see patterns of sol
  • I see multiple ideas here. First, automating of submission handling. This is a good thing. All it does is reduce errors and speed up grading.

    Second is automated testing. This is good for when you can test for the program working in general and also for specific errors you anticipate. However, this automated testing should NOT just be plugged in as a grade. Instead, a thorough review of the code should be performed. Not only does this help provide better feedback, but it helps you to spot cheating. Finally,
  • Purdue's CS department uses some sort of automated submission for their CS assignments. You submit your code, it actually runs algorithms on it to see if it is *too* similar to other students' assignments to help find cheating. Simple things like merely changing variable names will NOT make it through this system. I know a CS roommate of mine had to submit his homework with all source files and a makefile. The entire project had to compile and run in a certain environment to which the students had acces
  • Use CVS or other version-control system. Set it up so that each student gets their own module, and each student has access _only_ to their module [if individual assignments -- group assignments should naturally be set up for group access]. Then you can obtain and compile the code by performing a checkout rather than the student running a turnin program.

    Teach 'em how to use Make or Ant. Specify the directory structure and the name of the build-control file. This lets you automate the "checkout, compile,
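
    A minimal Ant build file for such a convention might look like this — the target names, directory layout and Main class are assumptions on my part, not anything the poster specified:

```xml
<!-- Hypothetical build.xml convention each student must provide. -->
<project name="assignment" default="compile">
  <target name="compile">
    <mkdir dir="build"/>
    <javac srcdir="src" destdir="build" includeantruntime="false"/>
  </target>
  <target name="run" depends="compile">
    <java classname="Main" classpath="build"/>
  </target>
</project>
```

    With the directory structure and target names fixed, the grader's "checkout, compile, run" loop is the same command sequence for every student.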

  • Automarkers are the WORST thing in the world.

    First semester, the automarker wouldn't accept a submission unless the output was character-perfect!

    It took off a mark for each attempt, too!

    Making students spend their time ensuring every period, underscore and comma is in the right place is the worst way to get them interested in computing.
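
    One way to avoid punishing students for stray whitespace is to normalize both outputs before comparing — a sketch, where the class name and the exact normalization rules are mine:

```java
// Hypothetical fix for character-perfect automarkers: collapse runs of
// spaces/tabs and trim surrounding blanks before comparing outputs.
public class LenientMatch {

    static String normalize(String s) {
        return s.strip()
                .replaceAll("[ \t]+", " ")      // collapse runs of blanks
                .replaceAll("\n{3,}", "\n\n");  // cap consecutive blank lines
    }

    static boolean sameOutput(String expected, String actual) {
        return normalize(expected).equals(normalize(actual));
    }

    public static void main(String[] args) {
        System.out.println(sameOutput("Total:  42\n", "Total: 42")); // true
    }
}
```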

Last yeer I kudn't spel Engineer. Now I are won.