A Depressing World of AI Talking to AI?

I received a request for a PhD over the weekend, and you could tell a mile off that it was written by ChatGPT (I have left out some of the details):

— — —

Dear Bill Buchanan

I trust this email finds you well. My name is [Your Full Name], and I am writing to express my keen interest in pursuing a PhD in Cybersecurity under your esteemed guidance.

… some details missed out

Your distinguished research and academic accomplishments in the field of Cybersecurity have greatly inspired me. I believe that your mentorship will play a pivotal role in ensuring a meaningful and impactful journey throughout my PhD.

I am particularly intrigued by your work in Critical Infrastructure Protection, Electronic Health Record (EHR) Privacy, and Security, and I am eager to contribute to the ongoing advancements in these areas. I would be grateful for the opportunity to discuss potential PhD openings under your supervision and explore how my academic background and research interests align with your ongoing projects.

Please let me know if there is a convenient time for you to meet or if you require any additional materials. Thank you for considering my inquiry, and I am enthusiastic about the prospect of working with you.

— — —

All the telltale signs are there, and the candidate has not even bothered to take out the “[Your Full Name]” part. Overall, it is written in the formal and flowery way that ChatGPT thinks we should use for a template, and then it gets us to fill in the blanks. Unfortunately, ChatGPT also hallucinates, especially when it comes to pinpointing people. For me, ChatGPT generally thinks that I do Critical Infrastructure Protection, which is not quite true. So the targeting of me with ChatGPT has not quite worked in this case.

The worry here is that we are moving into a world of AI talking with AI, whereby a ChatGPT agent could have responded on my behalf, asked for more details, and then replied again.

What a depressing world that would be, where we just leave all our communications up to AI bots that just parse replies. And, in the future, we could even train them on ourselves, so that they would know the language that we would use. Basically, it’s a step up from the automated replies that we often get, but generally more focused.

Overall, the rise of Gen.AI is a worry for academia, and for a good deal of the time we will not know if we are reading something generated by a bot or written by an actual human. I have already detected it in student reports and even in closed-book tests, where students forget, or are too lazy, to rephrase things. I worry most about our next generation, where the gathering of knowledge will be based on the generation of search queries. We will then become slaves to the machine, and our amazing gift of intellect will fade into the past.