Want to keep AI from stealing the farm along with your job?
Stop using ChatGPT to write your term papers and become a "high agency" entrepreneur
Everyone is obsessively buzzing these days about artificial intelligence, or AI.
I’m wondering whether it’s even time to think about revising the familiar refrain of the classic children’s nursery rhyme “Ol’ MacDonald” from E-I-E-I-O to A-I-A-I-O.
And maybe now instead of celebrating Ol’ Mac we’ll start with “Ol’ Elon”.
“And on this (server) farm he had some algos…er…rithms.”
A-I-A-I. Oh?
“With a grok, grok here, and a grok, grok there. Here a grok, there a grok, everywhere a grok, grok”.
Aaaaaaaayyyyyy!
Am I supposed to be excited or paranoid about AI? It’s a tough one.
As a college prof I don’t worry (yet) about my job being replaced by AI, mainly because I’m a philosopher, and Large Language Models, or LLMs, can’t really at this point solve the “Liar Paradox”.
At the same time, my subject expertise doesn’t necessarily lend itself to testing students through multiple-choice exams, only expository essays. And I suspect the vast majority of my students are gleefully deploying ChatGPT, Meta, Rytr, Perfect Essay Writer, Essay Flow, etc.
The last one, by the way, is specifically designed to bypass AI detectors, which are nowadays marketed routinely to tweedy chumps like me desperate to “catch” students whose “all-nighters” no longer take place in the library or the dorm room but around the video game console or under the strobe lights at the bistro, while the “machine” spits out in less than a minute a very impressive, garishly footnoted, 20-page term paper on “Foucault’s Likely Take on Queering the Sociology of Moral Contagion in an Age of Authoritarianism”.
It's the lightning-fast pace of AI tools and platforms as substitutes for real learning that drives me to my own all-nighters worrying that the bright young thing in my class on “Ethical Aporias in an Age of Insipidity” might screw up that interview for her first job as a moral-contagion analyst at some large, international company.
But, then again, a recent article in the Wall Street Journal convinced me neither she, nor I, should worry, since AI is going to eliminate all entry-level jobs for moral-contagion analysts anyway.
Or at minimum moral-contagion analysts won’t make six figures right out of college from now on.
Well, as my Trump-loving, biker wannabe, third step-cousin might huff, “she can always join the military”.
Perhaps not. Another op-ed on the same page of the Journal predicts that future wars will be fought with fewer humans and more “smart” weapons that don’t require that we shout “bombs away”.
Of course, we’ll always need people to decide to go to war. But AI, we are told, can serve up all the data they need to make “informed” decisions.
But war-making is not merely a matter of slurping up all the right data. It relies on making profound moral judgments as well.
AI perhaps can more precisely estimate a casualty count, but it can’t ascertain whether those casualties are necessary in the first place.
To render the question a little less abstract, I asked Meta’s version of AI the following question: “How would Kant's Critique of Practical Reason help the President of the United States decide whether America should defend Taiwan if China invaded?”
Here is the actual answer I received:
Immanuel Kant's Critique of Practical Reason focuses on moral philosophy, particularly the concept of the categorical imperative. To apply Kant's ideas to this scenario, the President would need to consider whether defending Taiwan would align with universal moral laws. Here's a possible thought process:
1. Universal applicability: The President would ask whether the principle of defending Taiwan would be universally applicable. Would it be morally justifiable for all nations to defend any territory facing invasion?
2. Respect for autonomy: Kant emphasizes treating individuals as ends in themselves, rather than means to an end. The President might consider whether defending Taiwan respects the autonomy and self-determination of both Taiwanese and Chinese peoples.
3. Duty vs. Consequences: Kant's deontological ethics prioritizes moral duties over consequences. The President would focus on whether there's a moral duty to defend Taiwan, regardless of potential outcomes or costs.
4. Maxims and moral laws: The President would need to formulate a maxim (a personal principle) for defending Taiwan and then evaluate whether this maxim could be willed as a universal law.
By applying these Kantian concepts, the President would need to weigh the moral implications of defending Taiwan against the potential consequences and consider whether the action aligns with universal moral principles.
Whether or not you’re even vaguely familiar with Kant’s philosophy (and remember, Kant’s “universalizability principle” was supposed to solve the challenge of moral decision-making once and for all), it’s easy to see that Meta merely summarized his Critique in a few sentences and then unartfully dodged the question.
AI has the moral clarity of your average politician, which is not very helpful, or hopeful.
In theory you’re supposed to go to college to learn how to sharpen your moral reasoning skills. But if it’s all too easy to have ChatGPT write your entire term paper for Philosophy 107, “Practical Ethics”, you might end up paying $2,000 per credit hour for a course the LLM got you through in the first place.
And your moral clarity will remain commensurate with that of the dumbest member of the rugby team.
But, you say, “wait!” At least I’ll graduate and, with that prized college degree, grab a well-paying entry-level financial analyst job right away with J.P. Morgan Chase.
Sorry, Holly.
According to a recent article in Yahoo News entitled “CEOs Say AI Is Poised to Wipe Out an Astonishing Number of Jobs”, entry-level positions for college grads are vanishing fast.
The employment rate for such grads now runs only about 1.6 percentage points above the national average, whereas coming out of the Great Recession the spread was about 8 points.
The reason is straightforward. AI now has reached a level of sophistication where it can automate entry-level work.
It’s not just coders, which the tech industry is shedding like a sheepdog in a heat wave, but also bankers, and especially beginning financial analysts.
An article in The Atlantic pulls no punches. “College doesn’t confer the same labor advantages that it did 15 years ago,” the author writes.
AI can do the garden-variety sort of advanced accounting and budget projections that people fresh out of business school used to do. Furthermore, it was through these entry-level tasks that younger employees historically gained the experience to be promoted in time to mid- or upper-level managers.
Last week I picked up at Walgreens a slick, glossy magazine, its cover pairing a human countenance with a white-as-a-sheet cyborg face, devoted exclusively to the wondrous future we can all anticipate with AI.
The focus was not so much on the “AI is coming for your white-collar job and you won’t even be eligible for Medicaid, sucker” style of dystopian doom-and-gloom disquisition the regular news media has been churning out the last few months.
It was more like “if we all use AI responsibly and ethically, life in your corporate cubicle won’t suck any more, dude.” It will do such incredible things as “supercharge your ability to succeed” (without explaining at what), locate your missing parakeet by instantly analyzing all the bird profiles from the trees in the neighborhood on CCTV, handle “automating your cooking and laundry”, make your video games even more realistic than they already are, or let you communicate with dead relatives.
The last one sounds intriguing until you read the fine print.
The magazine isn’t predicting AI-enhanced seances where you can query Mozart’s wife directly and ask her if she really did poison her husband in order to prevent him from drinking and gambling away the family’s life savings.
Or ask Nietzsche what he really meant by “God is dead”.
It explains simply how you will soon be able to create an LED-screen avatar of yourself, lard it up with voice recordings of all the stupid political dinner-table comments and tasteless jokes you’ve ever made, then massage AI into entertaining your great-grandkids long after you’re gone with deepfake videos of you boring your family and friends with the same seamless insipidities you uttered while you were alive.
That’s the AI future for us.
But that future isn’t quite here yet, and AI regrettably can’t save us from ourselves.
The last article in the mag spoons out some very practical advice: if we all want to remain amazingly virtuous contributors to human flourishing, we should take a little time away from our AI-enhanced video games and figure out how AI can best “supercharge” our quest for a paying job to replace the one it already replaced.
But making AI work for each of us in our own unique way comes with a somewhat steep learning curve.
For example, last week I went to refurbish a website I manage with WordPress and discovered that, in the last six months, every one of the templates has suddenly become AI-driven.
The new templates no longer allow you simply to type in and publish your own words. They insist on substituting text the bot recommends for your ipsissima verba.
If you just want to keep WordPress’s old-style “Hello World” as your initial post, the AI algorithm will wag its finger and insist you imitate Mr. Rogers with “it’s a beautiful day in the neighborhood, isn’t it, broh?”
Or something like that.
You can’t take AI for granted. You have to learn to negotiate with it, or it will try to run your life like your mom did when you were in middle school.
According to Andy Kessler writing in The Wall Street Journal (again), you can best reach a modus vivendi with AI if you’re a “high agency” kind of guy or gal.
If you don’t have the foggiest notion of what “high agency” means, Kessler says it’s essentially someone who has the gumption and ingenuity to make AI do their bidding rather than the other way around.
He cites the example of a “high agency” dude named Zachary Yadegari, who…
…just graduated from Roslyn High School on New York’s Long Island, built the Cal AI app: Point your phone at your Chicken Alfredo and it tells you how many calories it contains, with about 90% accuracy. It’s been downloaded some six million times. Calorie tracking doesn’t kill jobs, it’s just additive to society. And here’s where it gets interesting: Zachary was rejected for admission by every Ivy League school and Stanford.
So if you’re just a nerdy foodie from New Jersey who went to Trenton State because you were rejected by Princeton, you too can become your own master of the universe with AI.
Or maybe you can come up with a whole new dating app that will automatically detect deepfakes in profiles, run an instant background check to determine whether a match has a serial-killer genome, eliminate time-consuming swiping right or left, and deliver your soulmate to your door, all within the time it takes to explain why you don’t do dating in the first place.
What’s even better, you can get there without ever having to write a term paper.