William Lawless, Paine College
Description
\textit{Science} magazine has shifted back and forth between accusing Generative Artificial Intelligence (Gen-AI) of threatening human existence and thanking it for saving it. In these swings, \textit{Science} ignores the concept of agency (responsibility). In comparison, we have written that interdependence is a resource of agency that generative AI, as presently constituted, cannot surpass. First, consider how information is consumed by generative AI versus interdependence. Generative AI models use information to produce knowledge. Large databases are often curated to develop statistics for a confined domain to make a narrow range of decisions for dedicated applications; or symbols are used in algorithms to represent the knowledge needed to perform targeted tasks (e.g., driving a car). The data or knowledge transmitted should be stable, valid, and generalizable from laboratories to applications in the field. However, the information derived for Gen-AI is usually collected from the products of interactions; even when each step of an interaction occurs with real-time systems, the information produced is based on independent and identically distributed (i.i.d.) data, then modeled by algorithms with separable elements (e.g., tensors). Second, i.i.d. data, by definition, cannot replicate the interdependent data generated and processed by interacting agents. Third, as stated in 2022 by the National Academy of Sciences (NAS), an interaction cannot be disentangled, similar to the inability to look inside quantum entanglement or superposition; this suggests hidden information to which separable tensors do not apply.
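As a minimal sketch of the separability assumption at issue (the notation below is illustrative and not drawn from the NAS report), an i.i.d. model treats the joint state of two agents A and B as a product of independent factors, whereas an interdependent, entangled-like state admits no such factorization:
\[
\underbrace{|\psi_{AB}\rangle = |\psi_A\rangle \otimes |\psi_B\rangle}_{\text{separable: tensor factors suffice}}
\qquad \text{vs.} \qquad
\underbrace{|\psi_{AB}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_A |1\rangle_B + |1\rangle_A |0\rangle_B\bigr)}_{\text{entangled: no product decomposition exists}}
\]
In the second case, no choice of individual states $|\psi_A\rangle$ and $|\psi_B\rangle$ reproduces the joint state, which is the sense in which an interaction cannot be disentangled into separable elements.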
In contrast, first, for interdependence, the results of interactions and choices in the marketplace are based on probability, are random on average, and indicate that interdependence among free agents is a synergistic resource (e.g., it produces more innovation, more productivity per employee, and less corruption than central decision-making, or CDM). Second, the primary benefit we have found is that the least entropy production (LEP) generated by the structure of the best teams, in agreement with the claim by NAS, creates a tradeoff between LEP and the maximum entropy production (MEP) devoted to team productivity, a tradeoff also exploited by the best teams. Third, deriving information from i.i.d. data collected from interdependent interactions explains the replication crisis in the social sciences; moreover, our quantum-like theory of interdependence provides social scientists with a new theory of self-organizing orthogonal roles. Fourth, when interdependent agents assume agency to self-organize, the advantages afforded counter the existential risks posed by Gen-AI; CDM decision-makers, however, commonly suppress interdependence in order to govern with the i.i.d. data that they have made predominant, while interdependent agents, when free to self-organize around the hidden interdependent information they produce, gain a significant advantage over authoritarians, gangs, kings, and others who are not free. Fifth, and final for now, as an example of generalization based on Noether's theorem linking time symmetry to the conservation of energy, given the LEP and MEP of a team of n members, the only way to increase MEP is to add an (n+1)th member, as predicted by Cummings in 2015.
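One schematic way to express the closing claim (an illustrative formalization under our own assumptions, not a derivation from the cited work) is to treat each teammate as an orthogonal role vector and to measure a team's capacity for productive entropy with the Shannon entropy $H$ over the roles it can occupy:
\[
\langle r_i \mid r_j \rangle = 0 \ (i \neq j), \qquad
H_{\max}(n) = \log_2 n, \qquad
H_{\max}(n+1) = \log_2 (n+1) > H_{\max}(n).
\]
Under this reading, the structural entropy of a well-fitted team stays low (LEP) because its roles do not overlap, while raising the ceiling on the entropy available for productivity (MEP) requires adding an (n+1)th orthogonal member.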