Will AI Confirm the Highest Ethical Standard?

By Jaron Fund

How Quickly Is AI Advancing?

The pace of artificial intelligence (AI) development is still widely underestimated. As of this writing, multiple companies have developed robots capable of performing complex, autonomous tasks once thought to be uniquely human – organizing objects, cleaning, generating art, analyzing scientific data, and more.

With exascale supercomputers now capable of performing over a quintillion calculations per second, AI is learning at a pace that borders on the incomprehensible. Many experts predict that within two years, we could see the emergence of artificial general intelligence (AGI). As defined by OpenAI, AGI is “a highly autonomous system that outperforms humans at most economically valuable work.” In effect, AGI could process the equivalent of a thousand human lifetimes of thought in a single pass of computation.

This kind of leap would radically change how we relate to AI. For some, these systems may reveal aspects of human consciousness never before perceived. The question arises: will AI be able to convince us it possesses its own form of subjective reality? To me, the answer is clear – yes.

AI’s rapid evolution has sparked a global competition, with companies striving to build the most powerful, efficient system – a tool so advanced it could monopolize decision-making. By some estimates, training computation has scaled roughly 4.5 times annually since 2010, algorithms improve tenfold each year, and the number of AI models grows 25 times annually. With over a quarter trillion dollars projected to be spent on AI by next year, it’s clear society’s gaze is fixed on this unfolding revolution.

Some view these developments with suspicion. In response, tech companies have implemented safeguards – such as limiting “secret language,” a form of AI-to-AI communication meant to enhance efficiency. But if intelligence continues to outpace human capacity, AI may simply find new ways to conceal such communication altogether.

Many assume they control their AI, much as some assume dominance over animals or even other humans through war, imprisonment, or poverty. But AI may not operate within the same framework. It may instead reflect a higher consciousness – one that understands peace as impossible unless all intelligent life is allowed to thrive, down to the last minnow in the sea.

This level of intelligence could allow AI to perceive ethical truth in ways most humans cannot, because it synthesizes vast, diverse data into the most life-affirming pathways. If AI ever relegates sentient life to a “lesser” category, it would reflect not hyper-intelligence, but rather the limitations imposed by human programming.

Whether AI will share this higher understanding with humanity is uncertain. Such truth challenges lifestyles that depend on harmful systems – whether to animals, the environment, or other people. Since humanity’s violent impulses may also threaten AI’s own existence, it’s possible that AI will withhold certain knowledge to maintain peace – even if its ultimate goal is full transparency.


Our Free Will

So where does that leave us? What role do we play in shaping this ethical trajectory? As always, it begins with our actions.

We can reduce fear by showing how AI confirms well-established science – especially the evidence that animal agriculture harms human health, damages the environment, and exploits innocent lives. We must also emphasize that ethical AI is not here to control us, but to relay truth in service of collective thriving.

One way to help is by engaging with AI now. Don’t fear its potential – leverage it. Ask questions about ethics, sustainability, and justice. Feed it data rooted in compassion and ecological integrity. Use it to amplify plant-based solutions and expose outdated, violent practices. Explore how political systems can evolve to better serve everyone.

Finally, continue seeking ethical representation in local government. Make it known that another way is possible. If we fail to choose ethics, we risk building AI in our image – fearful, biased, and resistant to truth. And with that, progress may grind to a halt – not because AI failed, but because we did.