Responsible use of AI can revolutionize education - especially higher ed
But the choice is stark - build either "walls" or "windmills"
There is a Chinese proverb that goes something like this: “when the winds of change blow, some people build walls and others build windmills.”
That is what’s happening with the artificial intelligence (AI) revolution upending all the staid wisdom about education, work, and careers that has been the norm for many generations.
The temptation is to build walls – that is, devise rules, regulations, and even detailed and bureaucratic blueprints for institutional implementation which ultimately fail to shield against the winds of change.
But the only authentic alternative is to build windmills that will convert the same gusts into novel and beneficial forms of power that can transform how we live.
AI, of course, is not in any way a panacea.
First, current AI systems tend to recycle existing ideas rather than generate novel scientific hypotheses, limiting their role in advancing research. Efforts to produce AI-generated papers, for instance, often yield tangential or unoriginal content.
Second, complex reasoning remains a major challenge for conventional AI models.
Third, overreliance on AI in scientific research, according to Yale anthropologist Lisa Messeri, can lead to “illusions of understanding and enable stifling patterns of groupthink among investigators”.
Many widespread fears about the perils of AI are unfounded. As AI becomes a fixture not only of business and commerce but of everyday life, the fear, nurtured among tech boosters for years now, that the accelerating proficiency of AI platforms and systems will conjure up a Golem-like “superintelligence” capable of replacing or subjugating the human species appears to be more fantasy than reality.
The present-day limitations of AI stem, for the most part, from its heavy dependence on “large language models” (LLMs).
The statistical modeling of language was first theorized by information scientists in the 1940s, and language models have since become the key feature of the data-processing architectures of artificial intelligence algorithms.
These models deploy “deep learning” techniques to predict and produce coherent text from patterns learned across vast corpora, using billions, or even trillions, of parameters. The mathematics governing these techniques rests on probabilities embedded in the fundamental structures of all languages.
In other words, the “grammar” of any given language makes only a constrained range of utterances likely, and LLMs, with their astronomically fast computational capacities, can search enormous bodies of archived text in a short interval and generate intelligible answers.
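The core statistical idea, predicting the next word from probabilities learned in text, can be illustrated with a toy sketch. This is only a bigram counter over an invented miniature corpus, nothing like the deep neural networks of a real LLM, but it shows the same underlying logic of generation from learned frequencies:

```python
from collections import Counter, defaultdict

# A tiny invented corpus, purely for illustration.
corpus = (
    "the winds of change blow and some people build walls "
    "and other people build windmills when the winds blow"
).split()

# Count how often each word follows each preceding word (bigrams).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = bigrams.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Each prediction is simply the word most often seen after the prompt word.
print(predict_next("build"))   # some word observed after "build"
print(predict_next("winds"))   # some word observed after "winds"
```

A real LLM replaces the raw counts with probabilities computed by a neural network conditioned on long stretches of context, but the output is still, in this sense, a continuation drawn from patterns already present in its training text.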
However, their output is constrained by written “knowledge” that is already available online or in scannable documents.
LLMs cannot innovate. No LLM could have ever come up with Einstein’s theory of general relativity.
LLMs are wizards at summarizing, paraphrasing, and correlating information, but they are thoroughly incapable of engendering significant new insights.
But the immediate trajectory for applying artificial intelligence, at least as it exists today, at a “whole society” level is quite promising. AI is already rapidly transforming business activity and business models.
Language models with their extraordinary data-crunching facility are already revolutionizing marketing, predictive analytics, and operational efficiency.
In the legal profession AI accelerates contract review, research, and case preparation by analyzing documents and predicting outcomes, freeing lawyers for strategic work. AI bots draft motions and manage discovery, with usage jumping from 22% to 80% in certain firms.
In medicine AI enhances diagnostics through imaging analysis for early cancer detection and personalizes treatments using patient data and genetics. It speeds drug discovery by sifting vast datasets and supports decision-making with real-time evidence-based tools, easing workforce shortages.
And it also is projected to bring about futuristic changes in essential industries such as agriculture.
But the overshadowing challenge for AI will be how it impacts the educational system. The litmus test will ultimately apply to higher education.
Recently I moderated a panel of educators entitled “The Existential Crisis of Learning and the Future of the Knowledge Economy”. The panel included K-12 and post-secondary learning professionals as well as corporate specialists.
The agenda for the panel discussion was intentionally designed to avoid as much as possible use of the word “education”. There was a good reason for that decision.
The word “education” connotes a formal and voluminous infrastructure of diverse but centrally regulated instructional practices as well as transaction-based criteria for social and economic advancement.
“Learning” in the richer, classical, and more imponderable sense of personal self-development invariably becomes a secondary priority.
That is especially true when it comes to a college education. The perennial parental question of “what are you going to do with that degree” always intrudes into any conversation about the purpose of a college education.
For millennials and Gen Z, with their steep, cumulative loads of student debt and a rapidly deteriorating market for entry-level jobs, the question is becoming particularly consequential.
Various recent studies matching workforce trends with educational trends show that a college degree is becoming less and less relevant in the hiring process for career professionals.
The “PwC 2025 Global AI Jobs Barometer”, as an example, examined nearly one billion job postings and thousands of company financial reports across six continents. The most striking finding was that formal degree criteria are declining across all jobs, but the drift is even more pronounced for AI-exposed occupations.
A key metric indicates that the skills employers seek are changing 66% faster in occupations most exposed to AI, up from 25% the previous year. The wage premium for AI skills reaches 56%, a spike from 25% a year earlier.
Such data strongly suggests that practical competencies now command greater compensation than formal credentials in a broad gamut of cases.
Historically the typical “career-minded” college degree has been modeled as an expensive workplace preparation system that relieves employers of the responsibility for serious on-the-job training.
But employers now assume that job applicants already possess the talents they demand, or can acquire them on their own time. And even when a new employee lacks those basic competencies, AI can often carry out the same tasks more cheaply anyway.
But what employers are demanding in the era of AI is the singular ability to think.
The most authoritative institutional assessment comes from the World Economic Forum’s “2025 Future of Jobs Report”.
The report ranks analytical thinking as the top core skill employers require, with seven out of ten companies identifying it as essential. “Critical” and “creative” thinking likewise rank near the top of the 26 core skills the report identifies for workplace success.
Despite concerns that AI may replace higher level cognitive and deliberative tasks performed by white collar employees, the report explicitly names critical thinking and analytical reasoning as competencies in which demand is accelerating faster than supply in today’s economy.
How do we learn to think? Not through career-minded – and costly – specialized, post-secondary technical training regimens but through what have historically been known as the “liberal arts”.
That is not to say higher education should not focus on advanced, technical curricula, especially in the “STEM” fields – science, technology, engineering, mathematics. It should do everything it can to train computer science, engineering, and pre-med students in what they need to know about developments in their respective disciplines.
But career pathways for so-called “professional degrees” need to be recalibrated to de-emphasize what amounts to job training and to emphasize more systematically the development of “critical intelligence” in learners, enabling them to handle AI tools productively.
One of the roadblocks that this strategy will quickly encounter is the state of the liberal arts apart from STEM fields. A sizable number of academic studies in recent years have shown that the humanities and social sciences, which historically have always tilted left politically, now have become entrenched “monocultures” of left ideology that automatically exclude even slightly deviant viewpoints.
So-called “viewpoint diversity” in higher education is, therefore, virtually non-existent.
Writing in the journal Inside Higher Ed, distinguished New York University psychology professor Jonathan Haidt lambastes this state of affairs. He remarks:
Taboos, blind spots, groupthink and the politicization of scientific standards haven’t just made academic research narrower and worse, these trends have alienated the general public and reduced public confidence in higher education since 2015, not just on the right but across the ideological spectrum. Unsurprisingly, the increasing ideological conformity of the professoriate is reflected in the decreasing range of ideas that students encounter in the classroom. A recent national study shows a narrow range of perspectives included on undergraduate syllabi on such controversial topics as the conflict between Israel and Palestine, racial bias in the criminal justice system, and abortion.
Unfortunately, monocultures do not weaken of their own accord. They tend to reinforce themselves over time.
Yet, even if rebalancing the views of professors may not be a viable option in the short run, the internet and the explosion of cheap AI solutions available to the general public may force change despite the resistance of the academy.
As one of our panelists in the aforementioned symposium underscored, the internet itself has revolutionized access to “knowledge” acquisition, which, if properly leveraged, can circumvent the inveterate political biases of conventional cognitive gatekeepers, including professors.
Ever since the advent of the internet in the late 1990s, education futurists have been predicting the decline of the “sage on the stage” in the persona of the flashy faculty celebrity.
I myself laid out this vision a quarter century ago – the details of which naturally are now dated – in my book The Digital Revolution and the Coming of the Postmodern University. The vision has been deferred several decades because the internet until now has been a wilderness of inchoate information rather than an exhibit hall of disciplined and refined “knowledge”.
AI is rapidly changing that. For simple tutoring and baseline instruction in most fields, both existing and emergent AI platforms, if appropriately prompted, can replace the drudgery and inefficiency of rudimentary “transactional” learning.
Students and instructors who have mastered the elementary skill sets for more advanced inquiry and sophisticated problem-solving can now be freed to tackle the kinds of creative and “critical” thinking that successful use of AI requires.
Universities will still have an important role to play as emporia for the sorts of “critical intelligence” that the AI future foreshadows. But the current, excessively bureaucratic, federally mandated regime of credit and “contact” hours is a major headache for reformers.
The sclerotic educational establishment continues to throw up walls. Yet there are many inside and outside that establishment who are quietly constructing windmills.
But despite all the frenzy to maintain an “industry” that is the last surviving citadel of feudal economy and culture, the ground is shuddering.
And the walls are coming down.


