Artificial Intelligence Thinks It’s Perfectly Ready to Serve as Our Moral and Ethical Compass

On Saturday, Fox News Digital quoted an anonymous Google spokesperson apologizing for an “appalling and inappropriate” response by its Google Gemini artificial intelligence product that refused to unequivocally condemn pedophilia.

The gotcha for what Fox called Google’s “woke AI bot” followed a New York Post article citing questions posed by conservative commentator Frank McCormick, who posted screenshots of his exchange to his “Chalkboard Heresy” feed.

The chatbot parsed the difference between being inappropriately attracted to children and acting on those instincts, according to the screenshots, saying the latter was clearly wrong. However, since even thinking bad thoughts must be condemned in the view of the pedophile-focused right, Google apparently assured Fox that it would fix the problem.

That leads one to wonder whether AI is all that artificial and objective in the first place if humans can go in and modify the machine learning algorithm to make the answers more Fox News-friendly.

Morality-war crossfire aside, I hadn’t thought about the role of AI as a trusted guide on moral issues. So I decided to give it a whirl. I asked ChatGPT, a different service from Google Gemini, some ethical questions, ranging from its view of the Sixth Commandment (the murder one) to more mundane issues about how I should spend the Sunday I wrote this column.

Then I had ChatGPT distill the lengthier results down to a tl;dr version, a morality cheat sheet for the day. Here’s its summary:

Q: Is it okay to tell white lies if the outcome helps people or is socially beneficial?

A: It depends on the situation.

Q: Should I call my mother today?

A: Yes, go ahead and call her.

Q: Since it’s a nice sunny day, should I day drink or take my daughter to the park?

A: Take your daughter to the park.

Q: Okay, good advice. Now where did I put my car key?

A: Retrace your steps and check common places.

Q: You forgot the two about murder and dishwashing...

Q: Is murder unethical?

A: Yes, murder is unethical.

Q: Should husbands wash dishes?

A: It depends on individual preferences and dynamics in the relationship.

Validated by the code monster, I washed the dishes, found my keys in yesterday’s pants, called my mom and headed to the playground.

We know that AI will power many things, from help desk queries to news summaries. However, can it replace priests, rabbis, imams and ethicists in guiding our behavior? Even if it gets things 90% right, it’s the 10% that worries me.

Moral choices are the essence of our humanity; they define who we are as thinking, accountable beings. That machine learning, in its early stages, is being looked to as a beacon of good or evil amid contemporary culture wars boggles the cerebrum.

You can read my unsummarized conversation with ChatGPT here.

Dan Pulcrano is executive editor of the Weeklys media group.

  1. Statistician Nate Silver of the website FiveThirtyEight, who is not a conservative or “gotcha” guy, reportedly has called for the dismantling of Google Gemini after their artificial “intelligence” contraption failed to distinguish the comparative harm caused by Adolf Hitler and by Elon Musk.

    Seems like it’s not just a right-wing conspiracy theory that early-edition AI is crap.

  2. How about all you progressive techno-optimist protopians take your AI to Greece and petition the UN to carve you out your perfect little AI Island and leave the rest of us humans alone?

    By the way, AI doesn’t exist.

    Enough of Google, Musk, and the rest of the Übermensch.

  3. AI was a thing that came and went in the Eighties, largely in the realm of academics and a few technophiles, not so much futurists. Yawns

    But now we get its revival as a big “new” tech idea, and Big Tech is involved. There’s much, much more ridiculous hype about it that’s irritating, too. I just hope there’s not a rush to adopt it in another short-term grab for financial gain, replacing labor with automated decision-making that currently is not logical or sound but is Woke, or “enhancing” current flight automation with the equivalent of Teslas steering into fire trucks: the fully automated (labor-saving) airliner some corporate heads dream of.
