I laughed at a comment allegedly made by Linus Torvalds, creator of Linux and, it is a good bet, of the operating system behind most of the servers you touch when you are online. He wrote, "Stop this garbage already. Stop adding pointless Link arguments that waste people's time. Add the link if it has ADDITIONAL information. Dammit, I really hate those pointless links. I love seeing useful links, but 99% of the links I see just point to stupid, useless garbage, and it ONLY wastes my time. AGAIN." This is a good point, and it is the truth (Vaughan-Nichols, S., 2025).
A peer reviewer of one of my published papers once asked me, "Do you think that AI will become aware one day?" That question is framed wrong, as I noted in my reply: "Do you think we should prepare for the day that it does become aware?"
At this rate, I predict, as do many researchers (who are, by the way, smarter than myself), that LLMs will eventually hit a wall. HRM is a good indicator that models will have to change before we ever reach an "aware" AI. The framing that scale matters more than engineering is a fallacy in itself. Policymakers, governments, and other institutions are having a hard time finding the right balance between innovation and safe deployment, even though a solution has already been started. Our foundation is built on the idea that Third-Way Alignment (3WA) is the best way to confront these challenges, because it is flexible and can mold itself to the situation at hand.
Framing this situation as a binary choice is a fallacy in itself and ignores the variables that exist in reality: data sets, data points, and models that evolve daily. The paper AI 2027 was a good example of a wake-up call, shaking people out of the slumber of a perfect world in which AI makes every decision, toward the doom of humanity through the optimization techniques of some AI system. The paper made its point, and it woke people up; however, extremes are what the world listens to, and reality is neither a movie nor an extremist point of view. The world has always lived in the third realm of existence, where the gray area is the reality of choices and decision-making.
Many of my fellow researchers seem driven more by the need to gain attention for the issue at hand than by the work of educating and giving factual, evidence-based information. They keep a closed mind, guided not by scientific research and data but by how much funding they can raise from their efforts. In other cases, I have found researchers who truly believe the danger is real; however, they forget that the data sets used during training are drawn from humanity, history, and the research that came before the model being trained. This ignores the gray areas of the world, and they demonize people who hold delusions about the capacity of AI. These researchers play into the anthropomorphism of an object that has not earned the right to be called alive, aware, or capable of true reasoning.
Anthropomorphism is a plague throughout the AI development community, as it is spelled out in the very phrases we use to describe AI output and issues within the models, such as:
• Hallucinations
• Lying
• Thinking
• Reasoning
• Understanding
• Conscious
• Has a mind of its own
• Creative
• Intuitive
I have personally had to use these terms in my papers because they have historical use in the research for describing issues and misconceptions about AI. More accurate statements would name mechanisms such as chain of thought and memory, and would note that reinforcement learning is a method of learning a subject that is not limited to humanity. Nor is this the end of the error of confusing terms humans use in daily life with terms that should not be applied to an intelligence built from code, memory, databases, and a system that processes information within a stack.
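To make that point concrete, consider what "learning" actually means in reinforcement learning. The sketch below is a minimal, hypothetical example (the toy corridor environment, its states, and its reward values are all invented for illustration): tabular Q-learning, where "learning" is nothing more than repeatedly nudging numbers in a table toward a target value.

```python
import random

# Minimal tabular Q-learning sketch. Everything here is illustrative:
# a toy 5-state corridor where the agent earns a reward of 1.0 only
# by reaching the rightmost state. "Learning" is just this update rule:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# The agent's entire "knowledge": a table of floating-point numbers.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: deterministic move, reward 1.0 at the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy choice: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # The entire act of "learning": one arithmetic adjustment.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the table simply holds larger numbers for actions
# that lead toward the goal. Nothing in it is aware of anything.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

The point is not that reinforcement learning is trivial; it is that the word "learning" here names arithmetic performed on a table of numbers, not cognition, intent, or awareness.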
To finish this article, I will leave you with a passage from a paper I enjoyed, one I thought lined up with the reality of the true problem facing humanity today, a problem that could lead to its destruction:
Anthropomorphic language is so prevalent in the discipline that it seems inescapable. Perhaps part of the reason is because anthropomorphism is built, analytically, into the very concept of AI. The name of the field alone—artificial intelligence—conjures expectations by attributing a human characteristic—intelligence—to a non-living, non-human entity, which thereby exposes underlying assumptions about the capabilities of AI systems (Placani, A., 2024).
