(Opinion) Lou Cartier: Emotional intelligence from a robot?

Happy early spring, folks. Turbulent politics notwithstanding, do you sense renewal in the air? Or a gnawing unease with a technical “threat or opportunity” — at work, school, community — for which you simply cannot find the handle?

Specifically, consider recent reportage on the future of work, a stimulating keynote at the Aims Welcome Center on the notorious AI “robot” in our midst, and a current podcast interview with a foremost academic who talks fast but plainly on the subject.

Fair warning: I claim no expertise beyond that of curious readers in asking whether you regard AI (artificial intelligence) as an essential workplace tool or beguiling usurper. Is it destined to replace human intelligence or “assist” our learning and decision making?

At bottom, is AI more threat than opportunity?

To help the college community compare human intelligence and decision making with machine learning, this spring’s “Conversation Day” at Aims focused on the future. An ambitious program engaged the reasonable apprehension of faculty, staff and senior leadership, including trustees (who graciously served breakfast).

Topic du jour was machines — robots, computers and space-age software, however disembodied. In a warmup video, futurist Jane McGonigal nudged the assembly toward “urgent optimism” with a purposeful “game” or two. For example, describe the space you will wake up to in 10 years! The Institute for the Future, whose work Aims leadership has explored for a year, anchored the day’s in-service training. For more on McGonigal and the institute’s work, go to legacy.iftf.org/imaginable.

For starters, remember Joaquin Phoenix’s infatuation in the movie “Her” 10 years ago? Disorienting. Unthinkable. Unimaginable — or is it?

My business students, aspiring to become dependable producers and authentic leaders, legitimately wonder whether AI’s ability to simulate EQ (emotional intelligence) will ever be good enough to replace personal agency and human leadership. Can one ever bring to bear that intangible quality of “practical wisdom” without human oversight?

Last week, the Wall Street Journal reported how readers harness “the robot” at work. A mechanical engineer asks for a “second set of eyes” on his advanced math, as he would of a seasoned colleague … A consultant happily exposes his assumptions to challenge by his “client presentation assistant” … A senior manager uses AI to anticipate questions, sharpen his team’s insights, fine-tune its messaging.

In class, my students tend to associate workforce success with certain “intangible” traits, notably resilience, accountability, honesty and optimism. They find resonance in popular sources such as Colin Powell’s “Thirteen Rules of Leadership.”

Look up “It Worked for Me: In Life and Leadership,” Harper, 2012. Here is a sample:

Rule 1: Perspective

“This situation is not as bad as you think. Things will look better in the morning.” Powell counsels leaders to strip anxiety and fear from their decision making, lest emotion paralyze capacity to act. Best not to overreact, but to “swallow tough times with a grain of salt.”

Rule 9: Credit

Recognize how emotion motivates us all and how acknowledgment of others builds trust. A leader’s success is built on the talents of those led, those we serve, so share credit with others. “People need recognition and a sense of worth as much as they need food and water.”

Rule 13: Optimism

Powell concludes with a call to believe, hope, rely on instinct and be optimistic in word and deed. Labeling relentless optimism a “force multiplier,” he regards the leader who adapts rather than accepts defeat as a “force to be reckoned with.”

In the face of such optimism and resilience, empathy and understanding, moral intuition and “practical wisdom” — the emotional overlay — ought we expect the same from AI?

Can the “robot” truly emulate self-awareness, lived experience and lessons learned? Are decisions based on a kazillion data points and simulated emotional intelligence sufficient? Does it matter that Scarlett Johansson (“Her”) only mimics human empathy rather than truly feels it?

I leave you with Ethan Mollick, co-director of the Wharton School’s Generative AI Lab, who resists the either/or paradigm. Rather, Mollick challenges his students, colleagues and clients to imagine AI as “co-intelligence.” In his framing, AI agents neither receive nor exercise full autonomy.

His advice to businesses and business leaders: “Experiment and empower everyone in your organization to experiment.” Be wary of overreliance on enterprise players such as Google, Microsoft and Meta. Rather, try your hand at assorted small steps … such as allowing AI to help shorten a document without losing anything important.

His latest YouTube interview applies: youtube.com/watch?v=VCA7wdZBuUE. If 18 minutes is too burdensome, consider starting with the “lightning round” at 13:00.

And ponder Mollick’s four rules:

Use AI for everything. Figure it out by using.

Yes, AI will change my job. Double down on what you are best at.

Prompting AI need not be hard. Just talk to it like a person.

“This is the worst AI you are ever going to use.”

Lou Cartier teaches at Aims Community College, focusing on practical, ethical challenges facing business leaders and “soft skills” that underlie their people’s success. A member of the Adjunct Faculty Committee, he contributes to college-wide organizational development. Views and opinions here are solely the author’s and do not necessarily reflect those of Aims.
