Was it just last month when Sam Altman released ChatGPT into a gobsmacked consumer market? Only last week when Big Tech answered, upping the arms race in generative large language models?
Time blurs. For each new day brings fresh AI applications and fresh critical review by academic researchers, professional media, and plain old citizens having fun … or making mischief.
In all, a common theme, a growing anxiety: too much, too fast, too few guardrails.
In the face of relentless commercial release and eager experimentation in our homes, classrooms, and workspaces, one senses a creative tension. I feel an obligation to help my students through it. At least I will try.
A generation ago, social psychologists studied FOMO, the “Fear of Missing Out” on experiences enjoyed by everyone else. Today, investigators connect FOMO to increased levels of anxiety from heavy usage of AI tools. “Addiction” comes to mind (techxplore.com/news/2025-03-addiction-ai-trigger-anxiety.html).
Disclaimer: I once disdained the use of handheld computers when learning math formulas. I continue to deplore the mobile phone that never leaves our side, only begrudgingly tucked away for a 75-minute class or a 15-minute respite on the plane for takeoff and landing.
Lately I am less interested in process and protocol than in the moral philosophy and ethical reasoning that business and education bring to our exploration of transformative tech.
May we stipulate that AI is a powerful “assistant” used everywhere? Organizations cannot move fast enough to automate repetitive tasks (scheduling meetings, taking orders), crunch large data sets (finding patterns, forecasting customer behavior), and draft “routine” documents of all sorts (supporting HR, inventory and operations, marketing and sales).
I personally respect deep thinkers in education, government, business and technology who ponder AI’s “real and imminent threat to society.” They warn of bogus information going unchallenged, malicious code escaping conventional cybersecurity, and underage users, their brains not fully developed, given unrestricted access to AI “companionship”!
Granted, the frontier of invention recedes by the day. But still, are we expecting more from tech than we ask of ourselves? For wonderfully literate reflection on whether humankind will succumb to “market forces,” allowing technology to take us places we dare not go, listen to Henry Kissinger, extraordinary statesperson and speaker of truth to power:
First was “The Age of AI: And Our Human Future,” a book that received major attention from the national media in 2021. Now comes “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” a 12-minute video (with his coauthors) expressing more urgent, more imaginative concern. Please stay for the end! (www.youtube.com/watch?v=OT_S4g5G5N4)
For a complementary perspective, turn to Ann Skeet of the Markkula Center for Applied Ethics at Santa Clara University, whose take is more winsome. In “Preserving the Power of Human Connection,” Skeet probes the creative tension between artists and distributors in the entertainment industry and links Hollywood rivalries to journalists’ search for truth and credible sources. Each illustrates obstacles in the pursuit of human dignity, a special theme of the Center this month.
“Artificial intelligence can generate new music, but it is unlikely to replace live performances or the magic of timing that creates great comedy,” she writes. Skeet further cites the internet’s influence on journalism, likening it to its effect on the underlying business models of the recording industry and television (www.scu.edu/ethics/leadership-ethics-blog/preserving-the-power-of-human-connection/).
With such examples, Skeet reinforces the role of AI as a tool to empower human intellect and effectiveness rather than replace it. Both Kissinger and Skeet favor the AI “robot” as a partner in learning, capable of “raw intelligence” and troubleshooting, while humans remain the wise, empathic, ethical conscience in the partnership.
Back at the ranch, we educators keep our guard up, lest unfettered embrace of generative AI’s language and image models sabotage rather than ease student learning. That keeps our policies and instructional guidelines for harnessing AI tools under near-constant renewal.
For example, is tasking the “robot” to research an assigned essay topic, create an outline, or write the first draft of a research paper a reasonable, smart use of everyone’s precious time and energy? Or a bridge too far, a lapse in professional judgment, an ethical failure?
For the record, we faculty challenge ourselves to:
• Closely watch perceived threats and opportunities from AI. As coaches and mentors, we encourage creativity AND hold students accountable for academic honesty.
• Design more purposeful learning activities and more thoughtful instructions and rubrics (assignment metrics), which in practice lead to deeper student cognitive skills.
• Deploy our personal creativity and moral authority to help students conquer “writer’s block,” explore context, structure an argument, and process available feedback.
With another semester’s end closing in, I soon will discover how my students managed their “partnership” with AI to research industries and refine operational themes for their final Company Analysis team project (thank you, Rob, Roger, Alex, Sarah, Eric and Melanie).
May their independent judgment shine through.
Lou Cartier teaches at Aims Community College, focusing on business fundamentals, challenges of leadership, and the “soft skills” that underlie workplace success. He also contributes to college-wide assessment of student learning. Views and opinions here are solely the author’s and do not necessarily reflect those of Aims.