Handling student assignments possibly written with artificial intelligence / AI

I’m teaching two very different undergraduate courses this semester. For one, ChatGPT gave perfect responses to the assignment prompts. For the other, it gave something that superficially looked great but was actually barely mediocre.

4 Likes

That’s probably because GPT is a language model designed to predict one token at a time. It isn’t designed with logical analysis or abstract mathematics in mind. Whether those capabilities can form inside the model just from reading a large corpus is still up for debate. Maybe if there were an adversarial network that checked the logic of GPT’s output, together they would make an even stronger network?

Regardless, GPT actually did great on the math trick question in my example: it knew the right answer, it just didn’t outright say my question was ridiculous.

1 Like

It went through a long calculation for me and told me 1/2 cup is to 1/2 a cup as 2 cups is to 1 1/2 cups :slight_smile: It did get it right in the end.

1 Like

So maybe it answers exams more like me: memorize the final answer, and just make up the parts in the middle as you go. If it were actually me, the writing for the middle part would be extremely difficult to read.

2 Likes

It didn’t do well with a combinatorics question I asked it either; even when corrected on different parts of its working, it consistently gave me the wrong answer.

1 Like

When I was a child, my school didn’t allow the use of calculators before a certain grade, so how did they enforce that when it came to homework? Well, they couldn’t. And they didn’t.

But it didn’t matter. Homework assignments didn’t count toward our final grade. They were merely practice for our exams. If we cheated on our homework assignments, it provided us with no benefit other than to rob us of the opportunity to practice for our in-class final exams.

In any case, I suppose that the future of education (at least in the higher grades) will be more about learning how to write using AI rather than learning how to write on your own, in the same way that students currently learn how to use a calculator rather than how to solve problems in their heads.

2 Likes

Our math textbooks usually had the answers in the back (or at least the odd- or even-numbered ones). The point was never the final answer — if you didn’t show your work, the problem was marked wrong, even if you had the correct answer. The point was the “process”, not the answer. I know in NY State the standardized tests require students to show their work (a teacher of some 40+ years told me).

To flip that over to college essays, go back to writing by hand if professors are worried about “cheating”.

1 Like

Yes, they can get partial credit if their mistakes are made correctly. Yay NY!

I think the point was that “someone” was going back and erasing student wrong answers so they would be correct. If the students have to show their work, it’s going to look pretty suspicious that there are correct answers when the work shows they have no idea what they’re doing. Yay standardized tests!

That happened down south a few years back. Either or. I’m not arguing the whys, just stating the facts.

Well, Chinese parents were up in arms when the CCP tried to take measures to prevent cheating on the Gaokao, so I’d assume the desire to cheat on standardized tests is pretty obviously high. (Maybe I’m in the wrong thread; someone had just commented somewhere that Chinese math test scores = only Shanghai. I should add that Chinese test scores come with openly embraced and publicized cheating, while in the US cheating on tests carries severe ramifications.)

1 Like

The stakes on the gaokao are so high that it would be weird if you didn’t think about cheating even a little bit. It’s what 100% of college admission is based on: reach the threshold score and you get in; fall one point short and you don’t.

At UT Austin the entire math department does not allow calculators in any of its exams, which means the questions have to be written with that in mind. Recently they’ve adopted a system called “Quest” that all homework and exams are based on. It’s multiple choice, and entering an incorrect answer loses you points; get a question wrong enough times and you end up with negative points. The allowable tries are basically the number of choices minus one, so on a true/false question you get one chance. (The department does drop the three lowest homework grades, by the way.)

Obviously for exams you get one try: you mark your answers on a scantron sheet, the answers entered come from that, and that’s one try per question. There’s no way to show your work in a Quest-based exam, since the system only cares what answer you entered, not how you got there. That means a simple mistake can be very costly, with no chance of partial credit. (You are certainly free to dispute an answer if you know what choice you entered and somehow the scantron machine read it wrong.)
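The penalty scheme described above can be sketched as a toy function. The actual point values are my own assumptions, not Quest’s real formula; the post only says that wrong tries cost points, that allowable tries are the number of choices minus one, and that exhausting them goes negative:

```python
def quest_score(num_choices: int, wrong_tries: int, max_points: float = 10.0) -> float:
    """Toy model of the 'Quest' penalty scheme described above.

    Assumptions (mine, not Quest's actual formula): a fresh question is
    worth max_points, and each wrong try costs 1.25 * max_points / allowed,
    so using every allowable try (num_choices - 1) ends up negative.
    """
    allowed = num_choices - 1          # e.g. true/false -> 1 allowable try
    penalty = 1.25 * max_points / allowed
    return max_points - wrong_tries * penalty
```

For a four-choice question this gives 10 points untouched, and a negative score after the third wrong try; a true/false question goes negative after a single miss.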

The calculator ban only applies to the math department though, the physics department allows calculators during tests, but not graphing calculators.

I used to think the SAT/ACT was hard, but reading about the gaokao really makes my hair stand on end.

We seem to have found a model of how not to use ChatGPT to handle student assignments:

Guy

2 Likes

I read about this guy. I think he must have popped a gasket or something.

I suspect he may be looking for work somewhere else, soon.

Guy

1 Like

A post was merged into an existing topic: AI chatbots run amok

A bit off-topic, but I was searching for some Taiwan news and happened upon this site, which seems to feature only AI-generated articles, like this one: Taiwan president vows to keep status quo of peace and stability – ThePrint – ReutersFeed .

Somewhat alarming is the disclaimer at the bottom of the post: “Disclaimer: This report is auto generated from the Reuters news service. ThePrint holds no responsibilty for its content.”

No responsibility? The website looks like a legitimate news source, with plausible-sounding content, but the site takes no responsibility for that content? “Fake news” is really going to be a problem going forward, as AI-generated articles serve as the basis for further AI-generated articles, all with the disclaimer that nobody takes any responsibility for the correctness of the content.

2 Likes

I need to seriously rethink some of my classes for next year. A course in presentations, a course in writing research essays … both now very broken by ChatGPT.

I’m probably going to do a lot more in-class or exercise-based assignments next year: write a paragraph in the next hour citing this article that I’m giving you right now; “Here’s the article; now make a speech about it.” That sort of thing. But these courses are supposed to be built around longer research essays or longer presentations about researched topics, and I don’t know how to handle those tasks going forward.

One possibility is giving them very specific topics to work with, and perhaps providing them with the bibliography they need to use, but ugh … that’s so much more work for me, and it also limits their creativity and freedom to research something that interests them.

Another option is making sure they show me their work at different stages - outlines and bibliographies early on (similar to what @TT posted upthread) - but I’m not sure how much that’d really help. It wouldn’t be hard to generate those backwards from whatever finished product ChatGPT has already given them.

Tangentially: I may insist on all phones in bags at all times (my usual rule is don’t use them when someone’s talking). Last week I had one student using ChatGPT on her phone to prepare an impromptu one-minute speech! I didn’t realize we’d reached the stage of using bots for even rather basic conversation exercises.

3 Likes

A presentation course should focus on presentation, not content, don’t you think?

For writing, there are now ChatGPT-detection AIs, but I haven’t used any. I’m not sure I’d want the extra step, in addition to Turnitin and actually reading/marking. A zero-tolerance plagiarism policy would not go down well with students or admin, and anyway there are ways around detection, such as patchwork paraphrasing of something mostly written by ChatGPT, with some machine translation and copypasta thrown in as well.

For my freshman comp, this worked fairly well

As above: for presentations, I’d grade them heavily on performance. For longer research essays, maybe a process-oriented grade. A short version: marks for collecting sources and writing summaries; marks for outlining; marks for an imperfect first draft; marks for submitting a perfect final draft (that was corrected by ChatGPT).
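To put rough numbers on that, here’s a minimal sketch of such a process-oriented scheme. The stage names and weights are purely illustrative assumptions on my part, not an actual rubric:

```python
# Illustrative weights for a process-oriented grade; the split below is
# an assumption for the sketch, not a real course rubric.
STAGE_WEIGHTS = {
    "sources_and_summaries": 0.20,
    "outline": 0.15,
    "imperfect_first_draft": 0.25,
    "final_draft": 0.40,
}

def process_grade(stage_scores: dict) -> float:
    """Weighted average of per-stage scores (each 0-100).

    Missing stages count as 0, so skipping the process costs marks
    even if the submitted final draft is perfect.
    """
    return sum(weight * stage_scores.get(stage, 0.0)
               for stage, weight in STAGE_WEIGHTS.items())
```

With this split, a perfect final draft alone (the ChatGPT-only path) tops out at 40 points, while doing every stage honestly earns the full 100.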

Lol, I already suggested this!

You’re right about reverse engineering. About the only way to avoid it is to make them do every step in class, with no phones.

I was letting them use phones to check facts or translate words, but they were still copying sentences even when I started giving 0s when I caught them. I’m undecided about taking phones away altogether, but there aren’t good options here.

1 Like

I mentioned to my friend/coworker how I can tell the students aren’t actually doing their sentences themselves. They said it’s actually OK because the students are technically learning that new sentence. I’ve kind of adopted this line of thought because Asian countries are known for cheating. Now if it were American students? I would talk to them or make them use school laptops.

2 Likes