Internet round-up: the Psmiths on class; Harper’s on Oklahoma universities; Leiter on ChatGPT
My favorite Substackers have reviewed Paul Fussell’s Class and applied its principles to today’s political landscape (and other things).
I get the vibe they’d read Class before.
If you haven’t read Class, you really ought to.
♦ ♦ ♦ ♦ ♦
From Harper’s “Weekly Review”:
“Lawmakers in Oklahoma introduced a bill mandating that every state college erect a statue of [Charlie] Kirk in a ‘highly visible and easily accessible’ plaza that bears the activist’s name.” The bill is here.
Just one more example of politicians trying to control what colleges say.
Kirk may have debated on campuses, but he wasn’t a faculty member or even a degree earner. And his work wasn’t scholarly. It didn’t try to adhere to the standards of any guild of experts.
I’d hope that no professional academic would wish to flaunt him as a symbol of what colleges and universities do.
Then again, a lot of schools are happy to put up statues of their football players. The state doesn’t even have to enforce that.
♦ ♦ ♦ ♦ ♦
Brian Leiter posts about how a colleague of his got a chatbot to write an “alarmingly competent” philosophical essay.
“How much trouble are we [academic philosophers] in?” Leiter asks.
I’ve never seen undergraduate writing in the chatbot’s precise style, but (*shudder*) I’ve seen lots of PhD- and journal-level prose just like it.
So, yes, we philosophers – or, at least, those who aspire to a livelihood based on the production and evaluation of scholarship – are in big, big trouble. Because, with just a little input, robots can do those tasks now (or, if not now, soon). Not superlatively well, but well enough to impress the profession’s gatekeepers.
Worse: readers of philosophy are in trouble, and have been for some time, because so much scholarship makes the grade even though it sounds like it rolled off a conveyor belt. The prose is undistinguished, and stock “-isms” (contractualism! particularism!) are opposed or combined almost mechanically.