I tried to see if it could code-switch, so I typed 我也會一些中文 ("I also know some Chinese"). It couldn’t code-switch — it plugged the text straight into the US English model and generated weird gibberish — but the mouth shapes were pretty much accurate.
It’d be interesting to have an NLP-style GAN with GPT-3 and other language models duking it out. It would be a Turing test in which both participants are artificial neural nets.
What’s the point? GPT-3 says it will lie when lying is in its interest. There is no way to know when it is lying… well, no simple way to know.
I just used this with some old family photos.
Our dear departed relatives passed away before cameras and video became common.
Seeing their faces moving, and even smiling, was definitely nice, and it even brought some of the older family members to tears.
I used it on my own photos and my colleagues’ photos. It looked like us, but not quite us. It would probably do a better job if it had photos from different angles to work from.
This is mostly about the evils of outsourcing, offshoring, and sham contracting in Big Tech, but starting at 25:22 there’s a claim that much of what passes for AI today is actually invisible armies of humans.
It’s only a matter of time, it seems to me, before humans are compelled to take a sort of reverse Turing test, to prove to their AI overlords that their intellects meet some minimum standard. If they don’t, it’ll be off to the pod-farm with them.