AI and LLMs are black boxes of hidden digital alchemy. This needs to change.
In the pursuit of understanding one of technology’s most occulted (hidden) creations, Large Language Models (LLMs) such as those developed by OpenAI, Google, and Meta, we must confront a complexity that rivals the alchemical practices of old. The parallels between these modern digital chimeras and the hidden knowledge of black magic are not merely metaphorical; they are grounded in shared traits of obscurity, transformation, and the desire to maintain an elite priesthood, where the profane (i.e., us) are denied access to the sanctuary of true understanding.
The Proprietary Veil
The development of LLMs is cloaked in secrecy, much like the hidden knowledge of alchemists who guarded their philosophies and practices from prying eyes. The data used to train these models, often comprising an immense corpus of text from the internet, books, and other sources, remains largely undisclosed. This opacity is driven by the desire to protect intellectual property, to avoid litigation over potentially copyright-infringing data usage, and to maintain a competitive edge. Even so-called “open-source” models like Meta’s LLaMA or the Falcon suite are not entirely transparent about their training data or methods (LeCun, 2023).
The algorithms used to create the neural networks and the other under-the-hood code for these AI systems are also closely guarded secrets, and even if they weren’t, comparatively few people on the planet would be able to decipher them. While politicians and others cry out that we must “rein in AI” or regulate it somehow, few to none have any clue as to how these systems even work.
The “black box” nature of AI and LLMs raises significant questions about accountability, bias, and the ethical use of data. As noted by Bender et al. (2021) in their seminal paper “On the Dangers of Stochastic Parrots,” the lack of transparency in AI development is likely to produce models that perpetuate or even amplify societal biases due to the opaque nature of their training datasets. This obvious flaw in the very nature of LLMs and their construction is lamented by academics and intellectuals while the actual problem metastasizes.
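The statistical mechanism Bender et al. describe can be sketched with a toy example: a language model learns from the co-occurrence patterns in its corpus, so if an occupation appears more often beside one pronoun than another in the training text, the model inherits that skew. The miniature corpus and counting function below are purely illustrative assumptions, not drawn from any real dataset or model:

```python
from collections import Counter

# A deliberately skewed toy corpus (illustrative only, not real training data).
corpus = [
    "the doctor said he would review the chart",
    "the doctor said he was late",
    "the nurse said she would help",
    "the nurse said she was kind",
    "the doctor said she was thorough",
]

def pronoun_counts(texts, occupation):
    """Count which gendered pronouns co-occur with an occupation word."""
    counts = Counter()
    for sentence in texts:
        words = sentence.split()
        if occupation in words:
            for w in words:
                if w in ("he", "she"):
                    counts[w] += 1
    return counts

# The skew in the data reappears as a skew in the learned statistics:
print(pronoun_counts(corpus, "doctor"))  # "doctor" leans toward "he"
print(pronoun_counts(corpus, "nurse"))   # "nurse" leans toward "she"
```

A real LLM does this at the scale of trillions of tokens and billions of parameters; the point of the sketch is only that whatever imbalances sit in an undisclosed corpus are absorbed, invisibly, into the model.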
The Legal and Ethical Labyrinth
The legal landscape surrounding LLMs is as complex as the models themselves. Fear of copyright infringement lawsuits, as seen with cases involving AI-generated content, has led companies to shroud their data sources and methodologies in almost total secrecy. This scenario is not unlike the alchemist’s concern over sharing their dark secrets, fearing retribution from those who might see their craft as detrimental to them, or even just plain evil. Often, the latter is actually the case.
Moreover, the complexity of these models, with billions of parameters trained on vast datasets, means that only a minuscule fraction of individuals worldwide truly comprehend their workings. This small group of modern-day “magicians” holds significant power over the direction and implementation of AI technologies, reminiscent of the elite access to knowledge in medieval times (Mitchell et al., 2019).
The Illusion of Understanding
In public discourse, there’s a somewhat humorous yet potentially quite dangerous irony in our collective understanding of AI. We marvel at its capabilities while remaining largely ignorant of its mechanics. This situation is akin to an audience watching a magician perform without understanding the trick; the wonder is there, but so is a veil of mystery, a lack of true depth.
The ethical implications here are profound. If AI is to serve humanity (and not in a cookbook sense), its creation should be fully transparent to allow for ethical oversight, scrutiny, and public trust. The call for transparency is echoed in many academic circles, where scholars like Timnit Gebru argue for more open practices in AI research to prevent the perpetuation of biases and to ensure that AI systems are developed in a way that reflects the diversity of human experience.
Toward a New… Enlightenment?
In this AI-fueled digital age, we stand at a juncture where the secrets of AI could either lift humankind to new intellectual heights or drag us into a new Dark Ages of control and occultation of information. The world must shift away from the secretive practices of these code magicians toward the transparency of truly open-source science. As we move forward, the scholarly community, policymakers, and the tech industry must work in concert to demand the demystification of AI, ensuring that these incredibly powerful tools are not only transformative but also trustworthy and ethically sound.
The path ahead requires not just technological innovation but a rethinking of how we share knowledge. Only through this new concept of total openness can we ensure that AI, much like any powerful magic, is used for the benefit of all, not just the few who can decipher its spells.
There will be many who decry that we must obscure or hide knowledge from the masses for any number of reasons, from capitalist arguments to appeals to “national or global security.” But these are primarily the songs of the elite, who wish to maintain a hierarchy of power and control that has existed on this planet, in one way or another, for millennia. If we are to be a truly independent and free-thinking species, we must understand that knowledge wants to be free, and that the rewards of seeking maximum truth far outweigh the potential disasters.
As the wizards of the massive tech companies invest billions into AI, the secrecy they weave into their dark art increases exponentially.
A Brave New World or A New World Bravely?
While the high priests and kings of the modern digital age will say that humanity cannot be trusted with the knowledge they possess or even hope to one day achieve, they themselves have been shown, over and over again, to be unworthy of keeping it for themselves. Detractors of a truly open-source world will point to extinction-level possibilities, from biological threats to more exotic scenarios like the release of Zero Point Energy (ZPE), as ways humans could kill off the entire planet. Instead, we must seek a world where people are educated and offered the chance at a good life, so that these proclivities are minimized.
Utopian fantasies aside, there should be no reason a small cabal of digital wizards commands ultimate control over billions of souls on Earth, simply because they wish to occult knowledge for their own personal benefit. The further away we get from technologies that empower humans to ones that control them, the worse off we’ll be.
The digital wizards behind the curtain may be smart, but they do not deserve the protection of secrecy their spells engender. Humanity, and the future of our species as independent thinkers, hangs in the balance. We must reject any system that seeks control without full transparency. Anything else is downright suicidal to our freedoms.
References:
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for Datasets. Communications of the ACM, 64(12), 86-92.
LeCun, Y. (2023). The Future of AI: Open Source or Closed Innovation? Keynote at NeurIPS Conference.
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., … & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency.