AI ODYSSEY - Physics and Artificial Intelligence


Miguel Ángel Martínez Iradier

AI and machine learning are trendy, but little is said about their parallels and points of contact with fundamental physics, which are so full of meaning. In our increasingly artificial environment, we are more concerned with unraveling the black box of intelligent machines than with the great black and starry box of the universe. But don't get discouraged, because the two things might not be so separate in the end. It is always a question of coming out of the hole, that is, of finding universality; and this is usually at the antipodes of applied intelligence. Here one could say the opposite of what John wrote in his book of Revelation: "He who has understanding, let him not calculate…"

Keywords: Artificial Intelligence, two modes of intelligence, Renormalization, Scale relativity, hierarchical learning, Continuum

TWO MODES OF INTELLIGENCE

Possibly the greatest "success" so far of Artificial Intelligence (AI) has not been its concrete achievements, still quite modest, but convincing many that our own brains are a particular type of computer. This conviction can hardly correspond to reality, given that, as Robert Epstein says, not only are we not born with "information, data, rules, software, knowledge, vocabularies, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols or buffers", but we never develop them.

While few might disagree with Epstein on this point, the pull of the computational paradigm remains irresistible. This is not surprising, after all, since computation is today the only available way to replicate a whole series of human performances, and computing power multiplies a hundredfold every decade. There is also the perpetual challenge of translating that quantitative increase into a qualitative difference, which adds the indispensable intellectual interest for brilliant minds and developers.

We are not going to dispute here that machines can perform tasks so far only within the reach of man, or that the spectrum of those tasks inevitably grows with time and the work of engineers, since that much is obvious. The human being also develops an important part of his intelligence specifically to carry out tasks, although he does so with a plasticity that has little to do with that of machines. But even this fundamental plasticity is due to the fact that it is not separated from a concrete yet continuous perceptive background, which is the opposite of the strategies of cognition and representation based on high-level symbolic models of information processing.

Epstein cites as an example the landmark 1995 article by McBeath and colleagues, which examined how baseball players judge the trajectory of fly balls, an achievement that posed a challenge to machines. McBeath showed that the players' response completely ignores the various factors relevant to a numerical analysis of the trajectory in real time: what they do is move in such a way that the ball remains in a constant visual relationship, which is much simpler to do than to explain and is "completely free of calculations, representations, and algorithms".
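
To make the contrast concrete, here is a deliberately crude Python sketch of this style of control rule, in the spirit of the optical-acceleration-cancellation variant discussed in this literature; the launch values, starting position and gain below are my own illustrative assumptions, not McBeath's model, and the only point is that the "fielder" never computes a trajectory.

```python
# Toy fielder who never computes the ball's trajectory: he only tries to null
# the optical acceleration of the ball, i.e. keep tan(elevation angle) rising
# at a steady rate. All numbers are illustrative assumptions.
g, dt = 9.81, 0.02
vx, vz = 20.0, 20.0              # assumed launch velocity components (m/s)
bx, bz = 0.0, 1.5                # ball position (m)
fx, fspeed = 50.0, 0.0           # fielder position (m) and speed along x
prev_tan, prev_rate = None, None

t = 0.0
while bz > 0.0 and fx - bx > 0.5:
    t += dt
    bx, bz = vx * t, 1.5 + vz * t - 0.5 * g * t * t
    tan_elev = bz / (fx - bx)    # tangent of the ball's elevation angle
    if prev_tan is not None:
        rate = (tan_elev - prev_tan) / dt
        if prev_rate is not None:
            accel = (rate - prev_rate) / dt
            # If the image accelerates upward the ball will carry long: back up.
            # If it decelerates it will fall short: run in. No equations of motion.
            fspeed += 3.0 * accel * dt
        prev_rate = rate
    prev_tan = tan_elev
    fx += fspeed * dt

print(f"ball down near x = {bx:.1f} m, fielder ends at x = {fx:.1f} m")
```

The whole "strategy" is a single proportional correction on what the eye sees, which is the sense in which it is free of representations.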

This example has been cited so often that the defenders of "ecological" or "radical embodied" theories themselves often ask that its scope not be exaggerated. They certainly have reason to do so, since displays of human intelligence show aspects as varied as can be imagined. However, the case in question is precisely one of trajectories, the same trajectories with which analytic geometry, calculus, mathematical physics and ultimately the whole modern scientific revolution started. If all this came out of the analysis in time of the famous conic sections, the example should be all the more revealing for the computational paradigm, since it questions it not in its possible achievements but in its very baseline.

The embodied or "ecological" approach, which emphasizes the physical dependence of the brain on the rest of the body and the environment, is not exactly a novelty. Some have called it "postcognitivism", although clear expositions of this vision, in Merleau-Ponty or James Gibson, are in fact prior to the foundational milestones of cognitive psychology and the first babblings of AI. What this approach says is that the intelligent response cannot be dissociated from the perceptual continuum of experience, however elusive and practically impossible to delimit that continuum may be.

Obviously, the computational model, relying on information processing, cannot resign itself to this. The nature of intelligence is secondary to it; its priority is to reproduce intelligent or highly selective behaviors in well-defined tasks. And since it achieves this to an ever greater degree, there is no reason for surrender.

The problem is not only the social impact, but the pretension that the intelligence in which we participate, more than possess, is the same one that computers are reproducing in such a fragmented and super-specialized way. A pretension that would be laughable if it were not for the mimicry and pervasive character of social trends.

The processor would be the ultimate worker, and intelligence, subordinated as never before, would be the act of digging into the mine of the new data reality for a certain output. It is then understandable why operational intelligence is so overvalued today: it is sold to us in every way because it is what sharpens the tool of this new worker-miner.

Society, or rather social intelligence, has to conceive intelligence in general as that which allows us to ascend the scale of the social body itself. And this would have a double purpose: on the one hand, the search for flight from a blurry monster that refers everything to itself and yet is already the incarnation of a flight; on the other, to the extent that one ascends the ladder, that of exerting control over subordinate functions with the purpose of consolidating and incarnating that uncontrollable flight in a body that refers everything to itself.

This "double bind" is in line with the orchestration of the topic of intelligent machines in the media, as a threat/promise: they are going to end with the employment/they are going to end with the burden of work. But progress always had us between the stick and the carrot.

As can be seen, this reading of social intelligence is already "radically embodied" and "profoundly ecological": rather than trying to represent something surely unrepresentable, it stands in the middle of its dynamic, while trying to withdraw from identification and let it break free by itself.

Thus we see two great modes of intelligence: the one that consists simply in realizing, in being awake to what is happening, and the one that considers, discriminates, compares and manipulates in an endless process. The one that detaches itself from tasks, and the one that turns toward them. The one that contemplates and the one that acts. The characteristic modes, respectively, of Homo sapiens, the man who knows, and Homo faber, the man who makes. The desert of consciousness and the jungle of thought. Awareness by itself and intentionality. Aristotle's agent intellect and patient intellect, keeping in mind that for the Greek the agent intellect is the immobile one, and the one that moves is the patient one.

The former knows indiscernibly, by identity; the latter by identification, in which it loses its discernment. Of the first, all trace is lost in a continuum without qualities; of the second, the trace can be followed, but in leaps and bounds, owing to the discreteness of its operations.

We need not go further to understand that the two modes do not exclude each other so much as they imply each other; we cannot get rid of the first mode, as some intend, without equally getting rid of the second. And since it also seems that we cannot get rid of the second without getting rid of the first, there will never be a lack of experts in artificial intelligence who will assert that things like stacked neural networks, which in reality are only multilayered mathematical operations, succeed in demystifying not only operational intelligence but even consciousness, however little can be said about it.

Others think that the only possible demystification or disidentification is to recognize that the brain is not an autonomous entity and that we will never find in it a materialization of memory or the symbolic representations that define computers. However, this would have to open the doors to the contemplation of other wonders.

The so-called ecological approach is the really physical one, if we understand the physical as support or substance, and not as operation. As it happens, modern physics and its field theories are also based on decidedly discrete and operationalist criteria, even though fields are supposed to describe the mechanics of a continuum, even if it is a particulate continuum. The point particles of modern field theories are just an operational shortcut, even though we can be sure that material particles, considered at the proper scale, must exhibit surface and extension, since extension is inherent in matter.

Physics itself has become an algorithmic, statistical subject, in the precise sense that all description and interpretation is at the service of calculation. In this way the idea of a physical substance evaporates; only operations are left. It is no coincidence that Feynman worked in the computation group of the Manhattan Project while simultaneously working on the formalisms of quantum electrodynamics. And so we come to the present times, in which many physicists would willingly get rid of the hindrance of physical reality to reduce the universe to a giant computer in permanent digital flickering of ones and zeros. Matter doesn't matter, only information.

But perceptible matter, which tells us that there are soft and hard objects, belongs to continuum mechanics, and there can only be one Continuum by definition. There cannot be a material continuum on the one hand, and a continuum of motion, operations, or intelligence on the other. The Continuum always refers us to the possible evolutions of a primitively homogeneous medium in which distinctions are impossible and, if possible, are always compensated. Without homogeneity there is also no conservation of momentum, this being, more than force, the authentic basis of modern physics. Conservation is the essence of what we understand by physical reality —continuity—, while actions or operations will always be secondary.

Fundamental physics and artificial intelligence have much more than accidental contact, and not only on the side of discrete operations. The mathematics that underlies deep learning, even if in more dimensions, is as familiar to physics as anything can be: linear algebra, with its vectors, matrices and tensors, the heart of the continuum mechanics that governs the constitutive laws of materials science, classical electrodynamics or Relativity.
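
As a small illustration of this shared toolkit, the same matrix machinery writes a layer of a network and a constitutive law; the numbers below are toy values of my own and make no claim about any particular model or material.

```python
import numpy as np

# 1. One dense layer of a neural network: y = relu(W x + b)
x = np.random.randn(4)            # input vector ("features")
W = np.random.randn(3, 4)         # weight matrix
b = np.zeros(3)
y = np.maximum(W @ x + b, 0.0)    # a single layer of "deep learning"

# 2. Linear elasticity in Voigt notation: stress = C @ strain
lam, mu = 60e9, 25e9              # Lamé parameters in Pa (toy values)
C = np.array([
    [lam + 2*mu, lam,        lam,        0,  0,  0],
    [lam,        lam + 2*mu, lam,        0,  0,  0],
    [lam,        lam,        lam + 2*mu, 0,  0,  0],
    [0, 0, 0, mu, 0, 0],
    [0, 0, 0, 0, mu, 0],
    [0, 0, 0, 0, 0, mu],
])                                # isotropic stiffness "tensor" as a 6x6 matrix
strain = np.array([1e-3, 0, 0, 0, 0, 0])   # uniaxial strain
stress = C @ strain               # the constitutive law, same algebra as above

print(y, stress[:3])
```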

It would be wrong to say that quantum mechanics makes space or time discrete; there is nothing like that. The only discrete thing in QM is action, something for which we still have no due justification today. To see that time and space maintain their continuity at the microscopic level, one only has to note that electrons also describe elliptical orbits, as planets do, simply one class of the conic sections we mentioned.
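
For reference, all of those conic sections are captured by a single focal equation, with the type of curve fixed by the eccentricity alone (a standard relation, not specific to any of the works cited):

```latex
r(\theta) \;=\; \frac{p}{1 + e\cos\theta},
\qquad
\begin{cases}
e = 0 & \text{circle}\\[2pt]
0 < e < 1 & \text{ellipse}\\[2pt]
e = 1 & \text{parabola}\\[2pt]
e > 1 & \text{hyperbola}
\end{cases}
```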

If there are any doubts about the connection between physics and AI, it will suffice to recall that the physicists Mehta and Schwab demonstrated in 2014 that a deep neural network algorithm for image recognition works in exactly the same way as the renormalization group of quantum field theories such as QED, a technique that later spread to other areas, from phase transitions to cosmology and fluid mechanics.

A neural network confronted with the critical point of a phase transition in the model of a magnet, where the system becomes fractal, automatically applied the renormalization algorithms to identify the process. Ilya Nemenman, yet another theoretical physicist turned toward adaptive biology and biophysics, could not resist saying: "Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same."
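
The flavor of that identification can be conveyed with a toy sketch, which is mine and not Mehta and Schwab's actual variational construction: Kadanoff-style block-spin coarse graining of a random Ising configuration, structurally the same move as a pooling layer that keeps only the large-scale "relevant" feature.

```python
import numpy as np

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))      # a microscopic spin configuration

def block_spin(config):
    """Majority-rule decimation over non-overlapping 2x2 blocks."""
    h, w = config.shape
    block_sums = config.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)   # ties broken toward +1

layer1 = block_spin(spins)    # 8x8 -> 4x4: microscopic detail discarded
layer2 = block_spin(layer1)   # 4x4 -> 2x2: one more "layer" of the hierarchy
print(spins.shape, layer1.shape, layer2.shape)
```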

THE ADMINISTRATION OF A POWER

Delegating problems to AI has not only a risk but a price for social engineering, since its solutions, here as in every domain, are new problems, only problems even further removed from human intuition and its immediate environment. Intuition falls further and further behind the indecipherable inferences of increasingly complex, and therefore more opaque, systems. A physician can no longer know how an intelligent system has reached its conclusions about patients, even if he has contributed to its design or its database. As the index of opacity increases, so does dependency.

There is no need to imagine the chaos of a digital inferno to realize that this is undesirable. The solutions themselves are becoming more and more complex, not so much because of the increase in details, but because of the loss of intelligibility. Only intelligibility turns details into nuances rather than into unpredictable decision-making. Hence the current talk of explainable or trustworthy AI.

We have already had time to see to what extent the indiscriminate application of instrumental intelligence leads us fully to the unintelligible and the unnameable.

That is, in a complex natural process there is already an implicit presence of intelligence; we could say that it is just equilibrium, but, given that in an organism it also has to balance and encompass the inveterate partiality of our mind, the invisible half of its equation, we could still consider it intelligent in a more fundamental sense than our own thought.

In any case, it does not take a great deal of knowledge to understand that we are not computers and that intelligence does not arise from information processing; rather, that is where it is often invested and buried. Materiality, the physical character of intelligence, resides at the opposite end from the operational, but that end is never really separate from the continuum, and it resists quantification.

INTELLIGENCE AND PURPOSE

Intelligence is precisely appreciating nuances without the need for calculations; our calculations, on the contrary, can estimate thresholds, but they have no intrinsic sense of degree, of nuance, since the symbolic as such is not part of the physical process. And yet it is evident that an infinite number of mixed digital/analog systems can be built, and that innumerable forms of integration are possible, something which connects with the evolution of human/machine interfaces.

There are no calculations in intelligence, as there are no calculations in nature. As Fresnel said, nature does not care about analytical difficulties. How great would our awe at the world be if for a moment we realized that nothing we see depends on calculus, no matter how much it may later coincide with it. Computation is the routine in which our mind is immersed.

With the scientific revolution we thought we would definitively pass from a world with a finality ordered by a teleological intelligence to a world merely intelligible in terms of a description by calculus; but it is here that the mirage flourishes in all its splendor. Calculus does not liberate us from teleology; on the contrary, calculus, as a heuristic tool, could not be more oriented toward a purpose. And in addition, it is the heuristics of calculus itself that has led us to the heights of the incomprehensible.

This is evident in Kepler's problem. Despite what all the books say, Newton's law of gravity does not explain the shape of the ellipse with the central body at one of the foci. One starts from the integral of the curve —the global form— to derive the instantaneous velocities, but the vectors would never cancel if orbital velocity and innate motion were not mixed up. There is therefore no local conservation of the simultaneous forces, but a global one, and that is why it is customary to work with the principle of integral action, the Lagrangian.

All of quantum mechanics is based on a principle of action, and quantum field theories more specifically on a Lagrangian one. Any principle of action, Lagrangian or not, is integral by definition, and therefore the derivatives applied to it are implicitly subordinated to a purpose, as Planck himself and many others have never ceased to admit.
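
For concreteness, this is the integral formulation being referred to, written here in its textbook form for the planar Kepler problem; the formula is standard and adds nothing specific to the author's argument.

```latex
S[r,\theta] \;=\; \int_{t_1}^{t_2} \Big[\tfrac{1}{2}\,m\big(\dot r^{2} + r^{2}\dot\theta^{2}\big) + \frac{G M m}{r}\Big]\,dt ,
\qquad \delta S = 0 .
% Stationarity of the integral fixes the orbit as a whole (the ellipse with the
% focus at the centre of force); the instantaneous velocities are derived from it.
```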

It is evident that a procedure as artificial and blatantly ad hoc as the renormalization of QED cannot really take place in nature, or else we would have to think that every photon, every electron and every particle performs endless series of calculations at every instant. And we call these "blind forces"?

The easiest answer is to say that physics is increasingly statistical, and that statistics itself must in principle encompass all conceivable functions of AI. Although this may be true in the most general sense, no single type of statistics is going to lead us to the order we appreciate in nature or in intelligent behavior —neither in application nor in interpretation, neither in the universal nor in the particular. The statistical bias of the observed behaviors is far too improbable.

"For each clump, a bubble": In a medium totally homogeneous in principle any increase in density at one point would correspond to a decrease at another, and the same is true for motion, energy and other quantities. Differentiations and motions would take place in time, while the underlying continuity before being altered would remain neither inside nor outside, but enveloping it. In this sense the continuum is neither at the beginning nor at the end, but in the midst of all changes, since it permanently compensates them. The first intelligence, the first undifferentiated mode of which we have spoken, would coincide with the immediacy of this continuum.

However, the continuum of so-called "real numbers" that seems to prevail in physics cannot but be an idealization, since in the world we can only count discrete things and events, in whole numbers and at most in rational numbers or fractions.

Calculus cannot fail to draw ever greater complexities in its vision of nature, but all these arabesques do not properly belong to it; they are rather the image it shows on the surface of the mirror that we hold up.

FROM RENORMALIZATION TO SCALE RELATIVITY

Despite being created by man, these multi-layered neural networks, pushed to the limit, are in their responses black boxes nearly as enigmatic as many of nature's behaviors; though surely nature's complexity cannot be beaten.

In Feynman's path integral approach, in continuity with Huygens' universal principle of propagation, trajectories become ever more irregular at small scales, and between two points particles have potentially an infinity of undifferentiated trajectories. This feature, together with considering point particles instead of extended ones, is the main source of the infinities to be cancelled by the renormalization procedures.
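
As a reminder of the formalism referred to here, in its standard textbook form rather than anything specific to the works cited, the propagator sums a phase over all paths between the endpoints.

```latex
K(b,a) \;=\; \int \mathcal{D}[x(t)]\; e^{\,i\,S[x(t)]/\hbar},
\qquad
S[x(t)] \;=\; \int_{t_a}^{t_b} L\big(x,\dot{x},t\big)\,dt .
% The paths that dominate this sum are continuous but nowhere differentiable,
% which is the small-scale irregularity mentioned above.
```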

It has been asked why renormalization is capable of producing pattern recognition if the recognized objects, for example faces in a landscape, are neither fractal nor subject to recurrences at different scales. But that is to think from the point of view of the object considered, rather than from the subject and the process or transformation that may take place in order to arrive at recognition. Renormalization already implies to a large extent the notions of relevance, selection and elimination that other approaches, such as biology-inspired adaptive models or the information bottleneck theory, use to explain these black boxes.

Something similar to renormalization can be applied to space-time itself, obtaining an anomalous dimension. This is what the astrophysicist Laurent Nottale did in 1992, proposing Scale Relativity. It was recognized at the time as a brilliant idea, but the inclusion in the physical continuum of non-differentiable manifolds such as fractals represents a huge leap into the void that largely calls its feasibility into question.

The Heisenberg uncertainty relations imply a transition of the spatial coordinates of the particle to fractal dimension 2 around the de Broglie length, which Nottale extends to the temporal coordinate around the de Broglie time. The renormalizable quantum field theories themselves show a dependence on the energy scale, which for the first time enters explicitly into the equations of the current, and limited, Standard Model. What Nottale does is extend this asymptotic evolution and raise it to the rank of a general principle.
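
For reference, the transition scales involved are the usual de Broglie relations, written here with the reduced constant (conventions differ by factors of 2π); below them, on Nottale's reading, the coordinates acquire fractal dimension 2.

```latex
\lambda_{dB} \;=\; \frac{\hbar}{p},
\qquad
\tau_{dB} \;=\; \frac{\hbar}{E},
\qquad
D_F \;=\; 2 \quad \text{for resolutions } \varepsilon \ll \lambda_{dB}.
```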

We already know that physics cannot be reduced just to motion and extension, but what else it can be is not even remotely clear. If the theory of relativity demands that no coordinate system be privileged for motion, scale relativity demands the same for scale and its resolution, and with it for other associated quantities such as density. If there is any way to move from the extensive to the intensive without letting go of the thread of motion and extension that has shaped the whole of physics, surely there is none more natural and inevitable than this one. Between material points and material particles we would need a resolution tensor to "fill" portions of space.

As is known, the invariance of the speed of light regardless of the speed of the observer can only be made compatible with the classical equations of motion by means of the additional dimension of the Minkowski space-time continuum; otherwise Special Relativity, a theory focused on local conservation, could only present point events cut out and split off from the classical electrodynamic continuum.

I will try to picture this principle in the most elementary way. Since Pearson it has been said that an observer traveling at the speed of light would not perceive any motion, being in an "eternal now". Within the specifically kinematic framework of relativity, this is the only way to open a window onto something beyond motion. But what would this eye see in its endless approach to the absolute limit? As the motion slows down, its vision would zoom logarithmically into the scale of space and time. The asymptotic approach should admit its own transformation: the change of resolution by covariance that Nottale demands, the laws of motion being replaced at high energies by laws of scale.

In Nottale's words, quantum field theories, with standard renormalization procedures, "correspond rather to a Galilean version of the theory of scale relativity", and only work within certain limits. For Nottale, renormalization is a semi-group, since it allows the integration of larger scales from smaller ones, while the successful application of scale relativity would allow the inverse operation, obtaining the smaller ones from the larger ones.
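
Schematically, and only as I understand the simplest case presented in Nottale's papers, the contrast can be written as follows, with ε the resolution, λ a reference scale such as the de Broglie scale, and Λ the Planck length, which becomes for scale the unreachable invariant that c is for motion; treat the exact form as an assumption to be checked against the original.

```latex
% "Galilean" fractal scaling below the reference scale \lambda:
\ln\frac{\mathcal{L}(\varepsilon)}{\mathcal{L}_0} \;=\; \delta\,\ln\frac{\lambda}{\varepsilon},
\qquad \varepsilon < \lambda .
% In special scale relativity the composition of two dilations \rho and \rho'
% is no longer additive in the logarithm but Lorentzian, so that the Planck
% resolution \Lambda can never be reached or crossed:
\ln\rho'' \;=\; \frac{\ln\rho + \ln\rho'}{1 + \dfrac{\ln\rho\,\ln\rho'}{\ln^{2}(\lambda/\Lambda)}} .
```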

The scale recursion also made him think of the application of the principle to complex systems in general, from biological to social organizations, whose functioning depends simultaneously on different hierarchical levels or scales.

Nicolae Mazilu and Maricel Agop have recently undertaken a thorough evaluation of Nottale's principle in the light of the history of physics and mathematics. The introduction to this work states nothing less than that "once the principle of scale invariance is adopted, there is no other way to follow but the right way". And indeed, if the speed of light in the continuum and the Planck length are invariant, unreachable and absolute, scale has to be relative almost by definition, and by the same token non-differentiable. We do not see how this conclusion can be avoided.

Apart from this, scale relativity is an inherent necessity from the statistical point of view and the theory of measurement that must accompany it.

Fractal geometry can then be seen as a way of mediating between the continuous and the discrete within the above mentioned limits. And this leads us to recursiveness and the confrontation of algorithms both with the laws of nature and the description of their contours.

Of course scale relativity has its own big problems with calculus, but even before that it also requires, as Mazilu and Agop emphasize, a physical interpretation of variables in a well-defined technical sense, much the same as the one already formulated by C. G. Darwin in 1927: "the translation of the formal mathematical solution, which is in wave form, into terms of particles".

The thread to get out of this labyrinth is lost with Born's statistical interpretation, and that is why the two Romanian physicists undertake an authentic re-foundation of the theory that goes through the historical milestones of the Kepler problem, Hertz's mechanics, de Broglie's wave-corpuscle duality, Schrödinger's wave function and his color theory, Madelung's equivalent hydrodynamic equation, Cartan or Berry, among others.

The distinction between material particle and material point, or extended and point particle, its inevitable application to wave mechanics, the interpretation of the geometric phase in quantum potential and the return of the holographic principle to its original context in light mark out this meticulous and logical reconstruction in the spirit of classical physics.

In fact, Agop and Mazilu's reading of scale relativity offers a plausible explanation not only for the dilemmas of quantum mechanics, but also for the occurrence of fractal structures in nature, which can in no way be sustained by mathematics alone. On the other hand, ScR is strongly reminiscent of the de Broglie-Bohm non-local interpretation of quantum mechanics in terms of hidden variables, except that the parameters are not hidden at all, because "they are simply coordinates in the geometric space continuum".

Fractality across scales is not reduced to the mere spatial recurrence of Russian dolls with which we are most familiar, but affects the geometry of space-time itself. In my opinion, just as we speak of different modalities of renormalization, of positions in real space and in momentum space, with relativity of scale, where coordinates are functions instead of numbers, we could ultimately speak of three fundamental modalities: a scale for mass and its density, a scale for motion (momentum, force or energy), and a scale of length, which correspond well with our notions of time, space and causality, or matter, space and time; in addition to some generalized coordinates to define the transformations of these three aspects.

In any case, the physical interpretation as Mazilu and Agop understand it, with its coordinate transformations, connects well with the idea of representation in deep learning —the coordinates in which the data are configured. This notion of representation, also based on a mathematics of multidimensional tensors —the resolution in scale relativity being equally a tensor— is what distinguishes this emerging group of methods, also known as hierarchical learning, with different levels of representation corresponding to the hierarchy of conceptual levels of abstraction. And a fractal is essentially a cascading hierarchy of scales.

The more we use statistics, the more we need interpretation. This is the unquestionable conclusion reached both in physics and in machine learning methods. Moreover, in machine learning, the more statistics are used, the more deeply they become involved with physical parameters if they are to define their object.

This unexpected turn occurs precisely now when so many physicists would like to give up interpretation and even principles, to get by on computation or prediction alone. And this would be, leaving aside other considerations, a basic reason for the disdain shown towards this and other theories that demand more space for interpretation in modern physics. An interpretation that lies naturally at the end and is indispensable in any application.

Simply put, in every human enterprise there are principles, means and ends. In physics too, calculus is the means and interpretation the end, but this seems to have been forgotten since the advent of the algorithmic style inaugurated by quantum electrodynamics. Now Artificial Intelligence, expressly teleological and focused on objectives, gives interpretation back its relevance. Except that we know perfectly well that the human brain does not work by means of symbolic representations, and neither does nature, nor particles, nor waves.

Applied human intelligence and its mechanical replicas have to be purposive by definition. But to pretend that nature is oriented to ends through discrete operations is the height of absurdity, and Aristotle himself would have been the last one to affirm this. Thus, this "first intelligence" cannot be goal-oriented, but must be the reference for the intelligence applied to tasks.

This background intelligence or consciousness cannot be anything other than the invariance of the continuum, the homogeneity of reference on which events and things are drawn, and it would have to be identical in nature and in man, no matter how much each entity can extract a different output depending on its circumstance and internal structure.

Since this primary intelligence is indiscernible and simple, anyone would say that its usefulness is null. But is the global feature of realization useless? What happens is that it is neither quantifiable nor measurable. But to say that there are really two intelligences, one useful and the other useless, instead of two modes, would be total nonsense, as it would be to say that there is no continuity, when the discrete numerical analysis is condemned to follow behind its "ideal" pattern, or that this continuum is only physical, when attributes cannot be applied to the continuum itself.

It is not necessary to stop at philosophical analysis to see that intelligence is always going to be more than the ability to do tasks, and that something else is always indispensable for utilitarian and applied intelligence —just as the continuum cannot be dispensed with either in special, general, or scale relativity.

How much truth can there be in the idea of scale relativity? There aren't yet enough concrete results to judge it. But we can be sure of one thing: the requirement of continuity is absolutely necessary in nature and even in our generic sense of reality; the requirement of differentiability is not. This is man's exclusive business. So this principle will always have something new to contribute.

Let us not forget that Nottale is an astrophysicist in search of a fundamental idea; his starting point is not simple principles, but de facto complexity in constellations of data and observables. His proposal must be understood as a return from the interpretation to the principles, a turn that Mazilu and Agop justify with much more specific arguments.

On the other hand, this is only one among a larger number of theories that include scale factors —such as the gauge theory of gravity elaborated with geometric calculus in a flat space— which are differentiable and which it would be necessary to compare. This gauge theory demonstrates that Poincaré's program of elaborating a relativistic theory in ordinary Euclidean space by modifying the laws of optics was perfectly justified. In any case, what remains is that in the absence of an absolute criterion the scale can only be relative and based on comparison.

One can also look for more elementary models; follow, for example, Weber's line of relational mechanics, the grandmother of relativity, and deal with questions of scale as that theory does with forces, retarded potentials and the speed of light, returning to a flat and differentiable space and delimiting recursive functions via self-interaction, that "silly" idea in Feynman's words. But what is a self-interaction? It is that silly thing an isolated particle seems to do when it turns out that it is not isolated at all. And the same goes for our brain, although in this case the problem is, on the contrary, that it seems too intelligent to us.

Critics say that scale relativity is not a theory, and nothing could be more true if one has quantum field theories in mind. No, scale relativity is first of all a principle and a project, and so much the better if it is not a theory. Besides, wasn't a guiding principle so desperately sought in fundamental physics? Something merely intelligible. The theories mentioned are shielded to underpin the consistency of their calculations, and are like small islands in the sea; they have had their historical moment and contain enduring teachings, but basically they belong to a past that will not return. They often serve only as a Procrustean bed to force the explanation of phenomena that are not understood at all but are "predicted" from their data.

You don't open a black box with another closed box.

Today it is quite fortunate, and a great advantage, not to be one of those theories that subordinate principles and interpretation to calculus instead of making it an instrument. In the new environment, computations are increasingly left to the machines, and to the same extent one detaches oneself from them. Consciousness, if it is anything, is distance; so it is not impossible for AI to help us recover the most underestimated functions of our natural intelligence.

Nottale, Mazilu and Agop have all insisted that scale relativity is not only a general principle of physics but also of knowledge. This claim, which may sound excessive, can now be tested. In the arena of AI the question is whether it can be "discovered" by machines confronted with appropriate models, just as has occurred with renormalization. It will not lack an experimental basis, since the emerging technological environment works at ever smaller scales, where all this has many ways of being relevant. And, conversely, whether it helps to develop more efficient or more intelligent algorithms.

No particle accelerators are required, and a huge experimental front is accessible and available. It is totally false that new physics can only be found at high energies; the much-disputed boundary between classical and quantum behavior spans a huge continuous range of different scales depending on the case.

There is no spearhead like AI for the current multi-specialist in our tower of Babel, nor can we imagine a better one in the foreseeable future. Physicists, mathematicians, programmers, developers, biologists, linguists, psychologists... Here there is room to quickly explore possibilities that the different specialties, with their great inertia and defensive barriers, are not willing to consider. Ideas, if we speak of physics, such as scale relativity, retrodictive determinism, and many others that for various reasons go against the course adopted by the specialty. It is no coincidence that a good number of AI experts are physicists moving now into an area so enthusiastically open and prone to stochastic tinkering.

Too enthusiastically, if one thinks that today the mathematics that characterize a physical system can end up defining the control vectors for the behavior of individuals, social groups and human populations. These things are already routinely applied for the modulation of consumption or opinion. Like resolution, attention itself can also be described with tensors —and attention is as close as we can get to consciousness itself. As if it were something accidental, people speak of the "perverse effects" of technologies, when it comes to the most deliberate intentions and purposes.

If they want to put us all in one box, let's make this one as big as the whole universe.

THE CIRCLE OF UNDERSTANDING AND THE THREAD IN THE DIGITAL LABYRINTH

And this way, much to our regret, it seems that a circle closes. In the struggle between society and nature, we have made of nature an object and we end up subjected to the same treatment, and that with which we shape the one ends up sculpting the other. It is the same old story, only more formalized now; but being executed consciously, we have no excuses.

However, and simultaneously, another circle is closing, linking ever more tightly categories so far separate, such as the physics of space and the physics of matter, the extremes that above and below define our human scenario. On their understanding depends whether physics, and not only physics, has access to a real intelligibility and a universality that would somehow compensate for the lack of scruples of applied knowledge. We can only reject the latter, but not annul it.

By restoring their inevitable extension to material particles, we make it possible for "external space to enter internal space". Here the old saying that not only the drop melts into the ocean, but also the ocean melts into the drop, comes back to mind. And this happens at the same time that, in another order of things, the social inundates the individual and data floods theories.

If it can be said that we are the fundamental scale, the same could be said of any other being on any other scale; but in any case the world cannot fail to be seen from within. This does not depend on the number of dimensions but on the nature of the continuum.

Not only does the subject not cease to be indispensable in this whole process; the very substance that supports everything has to reveal itself as subject —for the same reason that intelligence turns out not to be simply "inside" the brain. The universe is not a giant computer —an utterly grotesque idea— but computers will allow us to navigate much more freely through its infinite space of functions. And they will do so because the incentive will have changed and we will have returned calculus to its proper place between principles and interpretation; for it is with these that the circle of understanding closes.

Physics began with clear concepts such as inertia, force, mass, space or time, only to realize in the end that all of them become more and more mysterious; conversely, in neural networks everything begins as an enigma, and the hope is that at some point the processes will be clarified. Naturally, the basic objective of machine learning is to find the functions f(X) of the input data that account for the output data Y. This type of reverse engineering has always existed in physics and is the essential part of what makes it predictive. And since in both cases a certain type of mathematical language is shared, the two approaches are highly complementary.
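
A minimal sketch of that f(X) ≈ Y framing, assuming for the example that the hidden "law" is a sine and the model class a low-degree polynomial; both assumptions are mine and serve only to show the reverse-engineering pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 200)
Y = np.sin(X) + 0.1 * rng.standard_normal(X.size)   # data from an "unknown" process

degree = 5
A = np.vander(X, degree + 1)                         # design matrix of monomials of X
coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)       # fit f by least squares
f = lambda x: np.vander(np.atleast_1d(x), degree + 1) @ coeffs

print("prediction at pi/2:", float(f(np.pi / 2)))    # roughly sin(pi/2) = 1
```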

In physics we move from principles to the interpretation of phenomena and data through calculus. In a more global approach, this would be only half the circle, the semicircle of the quantitative/analytical ordering so typical of physics. The second half of the circle would be a return to principles in qualitative and continuity terms, ideally ignoring calculus as much as possible —if we have been able to assume that nature and intelligence exist entirely independently of it. To my mind, this is where the challenge lies now.

A continuous and qualitative description, with the permission of space-time algebra, would necessarily have to be more ambiguous than one specified by calculus. But this is precisely where intelligence resides: in finding significant nuances without specifications, or with fewer of them. This paradox is insoluble, but it shows us the way back to internalizing, which is not the same as humanizing, knowledge —since it is high-level formalized knowledge, like the machines themselves, that is a particular human end.

CONCLUDING REMARKS

Our "two Intelligences" are very similar to the two trees of Paradise in the fable; we have not stopped grabbing and handling the Tree of the Science of Good and Evil that has brought us here; but the Tree of Life is still there, intact as ever.

It's clear that primary intelligence is more than secondary or applied intelligence, as well as that the latter is more than pattern recognition or the list of tasks dealt with by hierarchical learning.

We agree with Epstein that a full understanding of the human brain may still be, if it ever comes, centuries away. There are also in its development too many things that are "useless" from a utilitarian point of view and yet necessary to maintain, far beyond the whims of memory, the continuity from which our identity derives.

And even if we should ever get to know our brain, with its complex relationship to the rest of the body and the environment, we would still know nothing of consciousness as such, we would only know of the discontinuous emergence of thoughts against that unqualified background which realizes them.

The continuum of ordinary human experience and its body has little to do with that of fundamental physics; the former seems to us very concrete and the latter too abstract. However, fundamental physics seems so abstract to us precisely because it lost the thread of continuity with the classical physics that Mazilu and Agop are trying to re-establish. Even at very different levels, the two kinds of continuum point in the same direction.

AI systems are expressly oriented to a purpose, and the same is true of physical theories, so blatantly oriented to prediction. Since both use statistics massively, from this point of view there is nothing surprising. What is surprising, and never sufficiently pondered, is that nature achieves without purpose what we would not be able to do without it, and this, by way of continuity, leads us to the supposition of a first intelligence that is its reference, just as the speed of light is the reference for motion and its radiation is the reference for the quantum of action.

One could quietly forget about all the problems of physics and scale, and one idea would remain intact: the source of intelligence and nature is alien to computation/calculus/finality. With precision, a notorious land surveyor left it said: "the spirit is freed only when it ceases to be a support".

McBeath hit the ball farther than anyone, even if it flew out of the ballpark.

References
R. Epstein (2016), The empty brain, Aeon
L. Nottale (1992), The Theory of Scale Relativity
N. Mazilu, M. Agop (2018), The Mathematical Principles of the Scale Relativity Physics — I. History and Physics
N. Mazilu, M. Agop, Role of surface gauging in extended particle interactions: The case for spin
N. Mazilu, M. Agop, Skyrmions: A Great Finishing Touch to Classical Newtonian Philosophy
P. Mehta, D. J. Schwab (2014), An exact mapping between the Variational Renormalization Group and Deep Learning
N. Wolchover, A Common Logic to Seeing Cats and Cosmos, Quanta Magazine
M. K. McBeath, D. M. Shaffer, M. K. Kaiser (1995), How baseball outfielders determine where to run to catch fly balls, Science, 268(5210), 569-573
P. W. Fink, P. S. Foo, W. H. Warren (2009), Catching fly balls in virtual reality: a critical test of the outfielder problem
D. Hestenes, Gauge Theory Gravity with Geometric Calculus
M. A. M. Iradier, Self-energy and Self-interaction
M. A. M. Iradier, The multi-specialist and the tower of Babel
F. Kafka, The Zürau Aphorisms
