Handling student assignments possibly written with artificial intelligence / AI

For an argumentative essay, yes. But when teaching composition to first-year ELLs, that usually isn't the only course objective.

Hence paragraph writing in class. Any content instruction that needs doing can be flipped onto the LMS.

That's too many, but depending on the course and the students' level, the colleague could leverage the technology in their favor (e.g., CALL quizzes with closed answers), or just assign sentence-level writing.


:grimacing:

Guy


I was wondering whether they’re trying to get rid of him.


We only have STEM PhD students, but this is obviously a concern for us too. 80% are non-native English speakers, so the temptation will be high for essay-format class assignments, or even coding projects.

It is harder to get an AI to write a grant proposal that makes sense, or a thesis. But I can see the day coming when that will be possible too. Just hand the bot the data sets, have it analyse them, refine your hypothesis (or suggest one de novo), and produce the figures, discussion, and abstract. Voila! Instant thesis.

I can see how that would make a lot of the production side of science easier in a way, to be honest. AI is already being used to do actual science too.

Definitely the disruptor of this century.


Yeah… but the goal of the course I’m teaching is supposed to be academic research essays.


In that case, I might structure the course around the process.

First assignment: writing the research question, 5%

Second assignment: collecting a bibliography and compiling the reference list, 5%

Third: reading and taking notes, 5%

Fourth: outlining, 5%

You get the idea.


A good plan.

I'd give more weight, however, to 1) and 2). At least that's what my AI bot tells me I should be doing. :upside_down_face:

Guy


:laughing:


This article seems to belong in this thread:

https://archive.ph/dr1m4


Such a great quote from that article.


Didn't read the whole thread, but that problem has always existed. I used to look up which textbook the professors in my grad classes took their homework questions from, then find a pirated copy of the solution manual on Google. Some people pay another student to do their homework. You just have to accept that there's a level of cheating happening in academia, and you'll never get around it.



Playing dumb at the highest levels.


Case in point, here is an unedited example:
Query: “Write a 200 word description of a prize-winning photo showing a person holding a rose. Make it sad.”

ChatGPT:
"The photo depicts a lone figure standing in a desolate field, the bleak landscape stretching out behind them as far as the eye can see. They are holding a single rose, its petals withered and wilted, a symbol of their own broken heart. The person’s face is shrouded in shadows, but the pain etched onto their features is palpable. They stand there, alone and forsaken, their eyes fixed on the rose as if it is the only thing left in the world that holds any meaning for them.

The photo is a poignant reminder of the fragility of love and the way that it can wither and die just like the rose. It is a haunting image, one that lingers long after the viewer has looked away, leaving a sense of sadness and loss in its wake. The photo was a prize-winner, not because of its technical excellence, but because of the raw emotion it captures and the way it speaks to the viewer’s own experiences of heartbreak and loss. It is a photo that touches the soul and reminds us all of the power of love, even in the darkest of times."

The level of detail for topics related to Taiwan isn't there yet, a drawback of depending on online English sources.


You can thank Microsoft for taking over OpenAI. The first thing they did was remove the whole ethics team, and the first iteration of GPT under Microsoft (now v4) is aimed at scoring well on all kinds of tests for humans (Bar Exam, LSAT, SAT, USABO, AP).

We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.
GPT-4

Microsoft optimized it for being better at cheating! :rofl:

https://www.youtube.com/watch?v=TxkJMX0KyS0

https://www.youtube.com/watch?v=oc6RV5c1yd0


In my recent testing, I realized one way to find out whether students are cheating is to include a couple of questions that are phrased wrong. For example: prove that the sum of a triangle's interior angles is 360 degrees.

ChatGPT will try really, really hard to satisfy your statement, to the point that it bends over backwards to avoid calling you wrong in your first prompt.


So despite knowing the real answer, ChatGPT scrambled its artificial neurons and wrapped the math around the totally wrong assumption in the question.

I gave a math example, so ChatGPT produced an answer that a student who spent a minute reading it might recognize as a response to a trick question. However, if it's a reading-comprehension sort of trick question, a student who cheats with ChatGPT most likely won't be able to figure that out.
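For anyone curious, the claim buried in the trick question is easy to falsify mechanically, which is what makes it a good detector: the interior angles of any triangle sum to 180 degrees, not 360. A minimal Python sketch (the function names and the sample coordinates are my own, not from the post) computes the three angles of an arbitrary triangle from its vertex coordinates:

```python
import math

def interior_angle_sum(a, b, c):
    """Return the sum (in degrees) of the interior angles of triangle abc,
    where a, b, c are (x, y) coordinate pairs."""
    def angle_at(p, q, r):
        # Angle at vertex p between the rays p->q and p->r.
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.degrees(math.acos(dot / norm))
    return angle_at(a, b, c) + angle_at(b, a, c) + angle_at(c, a, b)

# Any non-degenerate triangle sums to 180 degrees, never 360.
print(round(interior_angle_sum((0, 0), (4, 0), (1, 3)), 6))  # 180.0
```

A student who runs even one example like this would see the premise of the question is false; a student who pastes the question straight into ChatGPT gets a confident "proof" of the impossible instead.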


God forbid teachers need to go back to talking to their students face to face! Lord knows the students paid full price for such luxury :innocent:


Whoa, this is surprising to me. I asked Bing to explain some examples in a textbook, since it's math I haven't used in over 10 years, and it told me, "I am not a tutor and I cannot help you with your homework." Even though I told Bing I had already passed that course, it didn't give me an answer and just asked me what I was studying now.


It's not great at math problems, I've noticed. It may be that it doesn't have strong self-evaluation ability; it just spits out whatever it comes up with. When we do math well, we're constantly checking our output for validity, I think.
