Comment Re:Morons? Or Geniuses? (Score 2) 104

This is exactly it. I'm a Learning Scientist whose research focuses on assessment, and I've been explaining to anyone who will listen that technology like ChatGPT is finally going to force our hand to reconsider what and how we measure learning. The problem is not the existence of ChatGPT but that too many educators rely on bad assessments. Essays and short-answer questions are just like any other type of assessment (e.g., multiple choice, simulation): good versions require deep understanding of the knowledge being assessed, and bad versions can simply be gamed (e.g., getting someone or something to write the essay for you, copying the multiple-choice answers from someone else).

But I will also say that it's not just a US problem. Believing that tests are infallible ways to measure knowledge or skill (i.e., high-stakes testing) is a worldwide belief.

Comment Re: Education Research already tells us the answer (Score 2) 58

That's a fair question (want to be an education researcher?).
The key challenge in determining the 'correctness' of an assessment is figuring out which of four states the learner is in:
A. They got the problem right and they understand what the problem is measuring.
B. They got the problem wrong and they don't understand what the problem is measuring.
C. They got the problem wrong and they understand what the problem is measuring.
D. They got the problem right and they don't understand what the problem is measuring.

A and B are what you hope for, but they don't always happen. Instead, C and D occur often, and that's when learning gets messed up: C increases the learner's frustration, and D can set a learner up for a major setback later.
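The A-D states above amount to a 2x2 classification of answer correctness versus actual understanding. A minimal sketch (the function name and boolean inputs here are my own illustration, not part of any standard assessment framework):

```python
def learner_state(got_it_right: bool, understands_concept: bool) -> str:
    """Map (correctness, understanding) to the four states A-D."""
    if got_it_right and understands_concept:
        return "A"  # accurate positive: right answer, real understanding
    if not got_it_right and not understands_concept:
        return "B"  # accurate negative: wrong answer, genuine gap
    if not got_it_right and understands_concept:
        return "C"  # false negative: understands, but the problem says otherwise
    return "D"      # false positive: right answer masking a misunderstanding

print(learner_state(True, False))  # → D
```

The point of writing it this way is that A and B are the cases where the assessment result matches the learner's understanding; C and D are the mismatches an AI grader tends to miss.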

AI is, IMHO, pretty good at identifying A and B scenarios, but it fails (sometimes spectacularly) in C and D situations.

So then the follow-up questions are:
1. Does the AI flag tell the instructor WHY the student got the problem wrong? If it doesn't, how is this any different from just comparing the code to a 'right' answer?
2. Does the student understand the difference between what they got wrong and what they got right? Does the AI help explain that difference?
3. What knowledge was necessary to get the problem 'right', and can the AI verify whether the learner has that knowledge?

Comment Education Research already tells us the answer! (Score 5, Insightful) 58

As an education researcher (more specifically, a Learning Scientist) who studies assessment, I already have high confidence in what the potential value/impact will be. [drumroll, please]:

For some homework assignments, this could work, though its ability to help students is limited. And for many students, it won't be valuable at all.

To be more specific, this could be a valuable effort as long as the feedback from the homework is what students need. Will the automated grading tell the learner why they got the answer wrong? Or will it just point out that they made a mistake? In relation to your own learning, think about how often you learn when someone tells you that you got something wrong. Did that help? Or even further, think of the times you got something wrong and then someone showed you how to do it the 'right' way. Did that help? I bet the answer is that it helped sometimes, and other times it wasn't really valuable, since you needed to develop a better understanding of what you weren't understanding.

The funny thing is that people (even educators) often forget the value of assessments, including homework. They think of assessments only as summative: telling the learner (i.e., student) and instructor (i.e., teacher) whether someone knows something or not. But, at most, that's about 50% of the value of assessment. The other half is formative: whether the assessment (including homework) helps the learner understand what, if anything, is preventing them from understanding (mastering the skill, using the knowledge, etc.).

Comment $50M is Nothing (Score 3) 25

[I am a former Social Studies Teacher, School Technology Coordinator, and currently a Professor who studies Education in the US]

What's particularly crazy to me is that corporations like Amazon and their associated foundations all think they can influence our public education systems for incredibly small amounts of money. $50 million is NOTHING at a national scale. The city of Buffalo, near where I live, has an annual school budget of over $900 million. That's just one city, and it's not even a particularly large school system (it doesn't include the suburbs surrounding Buffalo).

None of these corporations and their foundations understand the amount of money and scope of work required to educate children. Or they think they have a truly unique, transformational idea that will change education (they don't), so the money can be small. Or they think that school systems waste too much money (and they can and do waste money, but not at a scale where a $900 million budget could be shrunk to less than $50 million!).

Comment It's not peer-reviewed! (Score 1) 312

I'd just like to point out that the paper hasn't even been peer-reviewed yet. Even in the social sciences, and especially in the Learning Sciences, we use peer review as an important filter for research that might not be conclusive.
Further, this data set consists of interviews, observations, and surveys. I'm not saying their conclusions are unwarranted, but I sure as hell would want additional research before running a headline about how math homework is bad for certain populations. For example, maybe it's not all homework but just certain types. Or maybe the homework is a proxy for other phenomena, as is suggested in some of the discussion threads. More research is necessary, and it needs to be peer-reviewed!

Comment The Journal Editors deserve equal blame! (Score 1) 153

If these allegations are true, then forever shame on the Annals of Internal Medicine. The entire editorial board and reviewers should be sacked, and anyone directly involved in reviewing the paper should be blacklisted from ever participating in any other academic journal. The only value an academic journal provides is acting as a filter that prevents shoddy research from seeing the light of day.

Comment Definitely Unethical (Score 1) 190

I can't speak to the legality of the researchers' actions, but as a Social Scientist (cue jokes about not being a real scientist), I can tell you that their actions were unethical. Specifically, I'm shocked that their Institutional Review Board (IRB) thought it was OK to upload this data to a forum where everyone can access it.

Social Scientists, when conducting research, are under a moral obligation to ensure that their participants face no more than 'minimal risk' as a result of the research. The most common heuristic for minimal risk is whether the researchers are exposing the participants to more risk than they would normally face. In this case, while the participants had provided data to a semi-public forum (i.e., OkCupid), making the data easier to extract and mine definitely puts the participants at higher risk for data-related crimes (e.g., identity theft, bank fraud).

If those researchers aren't in proverbial hot water with their institutions yet, they will be when the lawsuits come. The lesson to be learned here, if you are a researcher: your IRB exists for a reason; check with it before creating a new protocol.

Comment Re:ask Shatner who gets credit (Score 1) 218

[Sarcasm On] Now that's some clear logic. You must have studied a lot of Math to know that we should give credit to one person who has not a shred of empirical evidence to suggest that his approach has led to positive learning outcomes. Let's stick to anecdotal claims - that will surely help us understand how kids learn Math better. I'll even have a go:
From my experience with kids of this generation, there's one teacher who's responsible for most of the increase in mathematical competency in recent years: the Flying Spaghetti Monster.
I'm sure you'll find any number of politicians and their cronies at the textbook corporations who will claim credit, but when they mess everything up and the children find themselves mystified and befuddled, the children turn to the Flying Spaghetti Monster for help.

Comment Re:Al-Jazeera USA was doing some shady things (Score 1) 276

Nope!
While 'Semites' are peoples who speak a Semitic (Middle Eastern) language, 'anti-Semitism', as defined in ALL dictionaries, is prejudice against Jews.
On a side note, I find it ironic that people who hate Jews will often include arguments that the term anti-semitism should not exclusively mean prejudice against Jews. They hate Jews so much that they don't even want to allow Jews a term to label that hatred!

Comment Re:You mean parents? (Score 1) 150

If you are interested, the data show that parental involvement isn't that big a factor in determining learning gains: http://visible-learning.org/ha... Which kind of makes sense, since there are plenty of individuals who achieve large learning gains despite having terrible parents. But I'll be the first to note that parental involvement is likely super important in an indirect way.
