An interesting article by a frustrated lecturer:
thewalrus.ca/i-used-to-teach-students-now-i-catch-chatgpt-cheats
He made reference to cheats only hurting themselves. I disagree. There seems to be good money to be had by being a career BS artist. I've worked with colleagues for 20 years who bumble through their careers knowing SFA, but are able to lie/cheat/fake and take credit for other colleagues' work. They rise through the ranks, no worries.
There seems to be good money to be had by being a career BS artist. I've worked with colleagues for 20 years who bumble through their careers knowing SFA, but are able to lie/cheat/fake and take credit for other colleagues' work. They rise through the ranks, no worries.
This has really annoyed me over the years too. Why do you think it works out like that? I have seen it so often that I no longer think it's a one-off accident.
My impression is that people see the confident person and assume that they are competent. If you are instead a bit more cautious, because you can't know it all, then people sometimes read this as a lack of confidence and therefore assume you are not competent. Which sort of sucks, as you end up with an organisation of back-biters who merely look competent to others but are otherwise poor at their jobs.
Managerial incompetents! That's the first thing: they should know how to pick real talent instead of BS artists.
The other thing that annoys me: if somebody is stuffing it up at the coalface, they promote them upwards, "out of the way".
So they get a pay rise for a nothing job, or they end up in a real job and stuff even more things up.
I'm finding AI a very useful teaching tool. It is very good at calculating complex mathematical problems that would normally take hours using spreadsheets. I'm finding it very useful for learning Australian superannuation rules while remaining tax effective. It also does a pretty good job writing snippets of computer code, and even at finding the best approach to all sorts of complex everyday problems. In many cases it gives far quicker and better info than relying on Google.
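To give a flavour of the spreadsheet-style number crunching I mean, here's a toy Python projection of a super balance with yearly contributions. Every figure and rate in it is made up for illustration only, not actual Australian superannuation or tax rules:

```python
# Toy projection of a super balance with yearly contributions.
# All figures here (balance, contribution, return, tax rate) are
# illustrative only, not real Australian superannuation rules.

def project_balance(start_balance, annual_contribution, years,
                    annual_return=0.07, contributions_tax=0.15):
    balance = start_balance
    for _ in range(years):
        balance += annual_contribution * (1 - contributions_tax)  # contribution net of tax on entry
        balance *= 1 + annual_return                              # one year of investment growth
    return balance

print(f"${project_balance(200_000, 25_000, 20):,.0f}")
```

The point is that an LLM can knock out this sort of thing, or the equivalent spreadsheet formulas, in seconds.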
I have no doubt that AI will totally change many occupations. It wouldn't surprise me if the author of the link you provided used AI to help him write the article.
Managerial incompetents! That's the first thing: they should know how to pick real talent instead of BS artists.
The other thing that annoys me: if somebody is stuffing it up at the coalface, they promote them upwards, "out of the way".
So they get a pay rise for a nothing job, or they end up in a real job and stuff even more things up.
Yep, I'm afraid you're too right there! It does appear that the most incompetent people find their way to the top (except in a business where the boss has actually created and built up the business himself).
I think it's because those people have exceptional marketing skills when selling themselves, but that doesn't equate to being a good manager. And 'unfair dismissal' rules make it virtually impossible to demote or fire anyone who subsequently proves to be useless.
I have no doubt that AI will totally change many occupations. It wouldn't surprise me if the author of the link you provided used AI to help him write the article.
That would be very hypocritical if he did that!
I totally agree. LLMs are here to stay now, and that's changed things. It sort of reminds me of the dictionary we all used. When writing job applications, a spelling mistake or poor grammar would get the application discarded. Then spell checkers in word processors arrived. LLMs are similar in effect, IMO, in the way they blunt a particular mental skill. I know the area of my brain that was good at looking up words in a book has degraded, and I suspect a similar neurological process is happening to people who heavily use LLMs.
Australian Tax law... I read that once. Yep, I'm asking ChatGPT next time!!
Australian Tax law... I read that once. Yep, I'm asking ChatGPT next time!!
Surprisingly, DeepSeek has a very good understanding of Australian tax and superannuation laws, up until 2023.
Grammarly has plagiarism and generative AI detection. It's good for plagiarism but not yet perfect for AI, though it can highlight overwhelming generative AI usage.
Universities are still figuring out what counts as allowable use of AI, and it varies by discipline. Unfortunately, ChatGPT will create fake references by taking a title from one paper, authors from another, and the journal name/volume/issue/pp from a third. Also, there are certain words, like "nuanced", that generative AI overuses, which is sometimes a giveaway.
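To show why those stitched-together references are catchable, here's a rough Python sketch (my own illustration, not Grammarly's or any journal's actual detector) that asks the public CrossRef API whether a cited title really belongs with the claimed authors and journal. The citation in the example is hypothetical:

```python
import requests

def check_reference(title, expected_authors, expected_journal):
    """Look up a cited title on CrossRef and compare what comes back against
    the authors/journal the citation claims. A mismatch suggests the reference
    may have been stitched together from several real papers."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return "no CrossRef match for this title"

    record = items[0]
    found_authors = [a.get("family", "") for a in record.get("author", [])]
    found_journal = (record.get("container-title") or [""])[0]

    problems = []
    if not any(name in found_authors for name in expected_authors):
        problems.append(f"authors don't match (CrossRef lists: {found_authors})")
    if expected_journal.lower() not in found_journal.lower():
        problems.append(f"journal doesn't match (CrossRef lists: {found_journal})")
    return "; ".join(problems) if problems else "looks consistent"

# Hypothetical citation as it might appear in a suspect manuscript:
print(check_reference(
    title="Deep learning for protein structure prediction",
    expected_authors=["Smith", "Nguyen"],
    expected_journal="Journal of Molecular Biology",
))
```

A mismatch doesn't prove fabrication, but it's a quick first pass before chasing the paper down manually.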
Dealing with ignorant people who lack understanding often requires a nuanced approach, as they may not yet grasp the complexity of the issues at hand. It's important to recognize that their limited perspective exists within a confined realm of knowledge, one that may not have been exposed to diverse viewpoints. Rather than dismissing them outright, it can be more productive to delve into conversations with patience, helping them explore new ideas and fostering a deeper understanding. By carefully guiding them through different perspectives, it's possible to gently broaden their awareness and move beyond their initial ignorance.
Fair point. The real remery wouldn't take a nuanced approach.
He would just tell them they are idiots.
I wonder... if I could customize it to be like.... carantoc
First address the nuances then it may be possible, although it is nuanced.
don't worry, I think the thumbs up were all just people excited that I said I would leave seabreeze
Dealing with ignorant people who lack understanding often requires a nuanced approach, as they may not yet grasp the complexity of the issues at hand. It's important to recognize that their limited perspective exists within a confined realm of knowledge, one that may not have been exposed to diverse viewpoints. Rather than dismissing them outright, it can be more productive to delve into conversations with patience, helping them explore new ideas and fostering a deeper understanding. By carefully guiding them through different perspectives, it's possible to gently broaden their awareness and move beyond their initial ignorance.
remery, I see your therapy sessions are paying dividends, congratulations.
"Glynn has since found hundreds more papers with hallmarks of AI use - including some containing subtler signs, such as the words, "Certainly, here are", another phrase typical of AI chatbots. He created an online tracker, Academ-AI, to log these cases - and has more than 700 papers listed. In an analysis of the first 500 papers flagged, released as a preprint in November, Glynn found that 13% of these articles appeared in journals belonging to large publishers, such as Elsevier, Springer Nature and MDPI. Artur Strzelecki, a researcher at the University of Economics in Katowice, Poland, has also gathered examples of undisclosed AI use in papers, focusing on reputable journals. In a study published in December, he identified 64 papers that were published in journals categorized by the Scopus academic database as being in the top quartile for their field3. "These are places where we'd expect good work from editors and decent reviews," Strzelecki says."
www.nature.com/articles/d41586-025-01180-2