Showing posts with label quantum.

Thursday, August 31, 2023

Project Stargate Held That The Universe Is A Projection Of A Lower Dimensional Reality

nature  |  At the time, reversible computing was widely considered impossible. A conventional digital computer is assembled from an array of logic gates — ANDs, ORs, XORs and so on — in which, generally, two inputs become one output. The input information is erased, producing heat, and the process cannot be reversed. With Margolus and a young Italian electrical engineer, Tommaso Toffoli, Fredkin showed that certain gates with three inputs and three outputs — what became known as Fredkin and Toffoli gates — could be arranged such that all the intermediate steps of any possible computation could be preserved, allowing the process to be reversed on completion. As they set out in a seminal 1982 paper, a computer built with those gates might, theoretically at least, produce no waste heat and thus consume no energy [1].
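A minimal sketch in Python (an illustration of the idea only; the gate definitions are standard, but the code is not from the Nature piece or the 1982 paper) shows why three-input, three-output gates of this kind lose no information: each maps the eight possible input triples onto eight distinct output triples, so the computation can always be run backwards.

```python
# Minimal illustration of reversible 3-bit logic gates (a sketch, not the
# authors' original formulation).
from itertools import product

def toffoli(a, b, c):
    """Controlled-controlled-NOT: flips c only when a and b are both 1."""
    return a, b, c ^ (a & b)

def fredkin(a, b, c):
    """Controlled swap: swaps b and c when a is 1."""
    return (a, c, b) if a else (a, b, c)

def is_reversible(gate):
    """A 3-bit gate is reversible iff the 8 input triples give 8 distinct outputs."""
    outputs = {gate(*bits) for bits in product((0, 1), repeat=3)}
    return len(outputs) == 8

assert is_reversible(toffoli) and is_reversible(fredkin)

# Because the mapping is a bijection (both gates are even their own inverse),
# applying the gate twice recovers the input: no information is erased.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits
    assert fredkin(*fredkin(*bits)) == bits

# Contrast with an ordinary AND gate: two inputs collapse to one output,
# so the inputs cannot be recovered, the hallmark of an irreversible gate.
```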

This seemed initially no more than a curiosity. Fredkin felt that the concept might help in the development of more efficient computers with less wasted heat, but there was no practical way to realize the idea fully using classical computers. In 1981, however, history took a new turn, when Fredkin and Toffoli organized the Physics of Computation Symposium at MIT. Feynman was among the luminaries present. In a now famous contribution, he suggested that, rather than trying to simulate quantum phenomena with conventional digital computers, some physical systems that exhibit quantum behaviour might be better tools.

This talk is widely seen as ushering in the age of quantum computers, which harness the full power of quantum mechanics to solve certain problems — such as the quantum-simulation problem that Feynman was addressing — much faster than any classical computer can. Four decades on, small quantum computers are now in development. The electronics, lasers and cooling systems needed to make them work consume a lot of power, but the quantum logical operations themselves are pretty much lossless.

Digital physics

Reversible computation “was an essential precondition really, for being able to conceive of quantum computers”, says Seth Lloyd, a mechanical engineer at MIT who in 1993 developed what is considered the first realizable concept for a quantum computer [2]. Although the IBM physicist Charles Bennett had also produced models of reversible computation, Lloyd adds, it was the zero-dissipation versions described by Fredkin, Toffoli and Margolus that ended up becoming the models on which quantum computation was built.

For the cosmos to have been produced by a system of data bits at the tiny Planck scale — a scale at which present theories of physics are expected to break down — space and time must be made up of discrete, quantized entities. The effect of such a granular space-time might show up in tiny differences, for example, in how long it takes light of various frequencies to propagate across billions of light years. Really pinning down the idea, however, would probably require a quantum theory of gravity that establishes the relationship between the effects of Einstein’s general theory of relativity at the macro scale and quantum effects on the micro scale. This has so far eluded theorists. Here, the digital universe might just help itself out. Favoured routes towards quantum theories of gravitation are gradually starting to look more computational in nature, says Lloyd — for example the holographic principle introduced by ‘t Hooft, which holds that our world is a projection of a lower-dimensional reality. “It seems hopeful that these quantum digital universe ideas might be able to shed some light on some of these mysteries,” says Lloyd.

That would be just the latest twist in an unconventional story. Fredkin himself thought that his lack of a typical education in physics was, in part, what enabled him to arrive at his distinctive views on the subject. Lloyd tends to agree. “I think if he had had a more conventional education, if he’d come up through the ranks and had taken the standard physics courses and so on, maybe he would have done less interesting work.”

 

The Cellular Automaton Interpretation of Quantum Mechanics

springer  |  This book presents the deterministic view of quantum mechanics developed by Nobel Laureate Gerard 't Hooft.

Dissatisfied with the uncomfortable gaps in the way conventional quantum mechanics meshes with the classical world, 't Hooft has revived the old hidden variable ideas, but now in a much more systematic way than usual. In this, quantum mechanics is viewed as a tool rather than a theory.

The author gives examples of models that are classical in essence, but can be analysed by the use of quantum techniques, and argues that even the Standard Model, together with gravitational interactions, might be viewed as a quantum mechanical approach to analysing a system that could be classical at its core. He shows how this approach, even though it is based on hidden variables, can be plausibly reconciled with Bell's theorem, and how the usual objections voiced against the idea of ‘superdeterminism' can be overcome, at least in principle.

This framework elegantly explains - and automatically cures - the problems of wave function collapse and the measurement problem. Even the existence of an “arrow of time” can perhaps be explained in a more elegant way than usual. As well as reviewing the author’s earlier work in the field, the book also contains many new observations and calculations. It provides stimulating reading for all physicists working on the foundations of quantum theory.

Monday, July 10, 2023

music, pattern, and the neurostructures of time (redux 10/18/09)

Noology | The term pattern has recently gained prominence as a key term in understanding mankind's quest to make the universe intelligible, to fashion a Cosmos from the pure Chaos of the indiscriminate swarm of photons, electrons, air pressure changes, and chemical and physical stimuli that organisms are exposed to every instant of their living existence. On pattern are based not only the sciences, but also human society and, in the wider sense, life and the lawfulness of the universe. The present contribution connects Gregory Bateson's work as a recent trailblazer in the recognition of the role of pattern with Goethe's earlier work on Morphology and Metamorphosis. It links this to the current scientific understanding of the working of the brain as neuronal activation patterns, consisting of oscillation fields and logical relation structures of neuronal assemblies, treated formally as coupled dynamic systems and neuronal attractors, which are characterized by their space-time dynamics. These are called neuronal resonance patterns, and patterns of patterns: metapatterns. Thus, pattern is the "infrastructure" of neuronal processing happening in our brains, below, and a few milliseconds before, our working consciousness experiences the "phainomena" and "noumena" of our discernible impressions and thoughts. This spatio-temporal neuronal infrastructure is then re-interpreted in a Neo-Pythagorean way, as the "inner music of the brain", which supports a new validation of the old Pythagorean world views.

Tuesday, July 04, 2023

Time To Revisit Pulsed Vibrating Plasmas And The Pais Effect

glennrocess |  So far, not a single physicist of note has been willing to give Dr. Pais’ claims anything but short shrift, and the Navy has since admitted they were never able to prove the Pais Effect actually existed, much less enabled any of Dr. Pais’ wondrous inventions. Soooo…that’s the end of the story, right? It was all just a case of “too good to be true”, right?

Nope. Don’t take off that neck brace just yet. Whiplash #2 was included in the fine print.

It turns out that during TheDrive.com’s investigation, they found a document submitted by NAVAIR’s Chief Scientist/Chief Technology Officer James Sheehy wherein he stated that Dr. Pais’ room temperature superconductor is “operable and enabled via the physics described in the patent application”.

Whiskey Tango Foxtrot, Oscar? The Navy says the Pais Effect doesn’t work, but NAVAIR’s Chief Scientist/CTO gave a sworn statement saying it does work! While I tend to be strongly skeptical of wild claims by any scientist, the ones in charge of research are responsible for keeping the pointy end of our military’s spear the sharpest on the planet, and tend to be hard-nosed, take-no-BS types. Of course they will lie through their teeth as the situation demands, but why would the one in charge lie about this?

I often tell my wife that one thing every military retiree learns along the way is how to justify (almost) anything. At a moment’s notice we can pump out barely-plausible excuses that would make OJ’s lawyers blush. This also means that we’re usually pretty good at figuring out why a government or military functionary would do something out of the ordinary. In this case, I can think of three possibilities: (1) Drs. Pais and Sheehy are both wrong and full of bovine excrement, (2) Dr. Pais is wrong, Dr. Sheehy knows it, but says it works, and (3) they’re both right and the Navy is now lying when it says that the Pais Effect cannot be proven to work.

  1. Dr. Pais and Dr. Sheehy are both wrong. While possible, this scenario is the least likely for the reasons I stated above. I think it is highly unlikely that Dr. Sheehy, being who and what he is, would have issued a sworn statement saying the Pais Effect worked if it didn’t actually work.
  2. Dr. Pais is wrong, Dr. Sheehy knows it, but says it works anyway. This is possible. In fact, Forbes.com posited that this could be a disinformation campaign reminiscent of Reagan’s Strategic Defense Initiative, colloquially known as “Star Wars”: if we spend a few million dollars on a project and make wild claims as to its success, perhaps China will waste hundreds of billions chasing the same Pais Effect down a rabbit hole. As early as 2017, Dr. Sheehy said that China was already investigating the effect. One must wonder, then, whether China is doing the same thing in reverse with the Pais Effect idea and now our best and brightest are tearing their hair out trying to develop something that isn’t real.
  3. Both Dr. Pais and Dr. Sheehy are right, and the Navy is now lying about it. Maybe. Definitely maybe. Despite what the rest of the professional physics community says about the Pais Effect, IF it works, IF Drs. Pais and Sheehy are right, the Navy would have very good reason to deny it. The claimed inventions in and of themselves would radically change the balance of military and political power around the planet, so keeping such information under wraps would allow America to develop the technology and maintain sociopolitical supremacy much as we did by being the first to develop atomic and thermonuclear bombs. Of course, China would have the same motivation and would be much more effective at keeping it secret. “What is this thing called a Freedom Of Information Act request? Off to the reeducation camp with you!”

Indeed, hope springs eternal in the breasts of geeks, nerds, and retired sailors. Yes, we would dearly love for the Pais Effect to be real, for the dream of a DeLorean with a Mr. Fusion pumping out the obligatory 1.21 gigawatts to come true (did I mention Dr. Pais also patented a compact fusion reactor and may have worked on a spacetime modification weapons system?). But no.

“Extraordinary claims require extraordinary evidence”, a phrase popularized by Carl Sagan, must be applied here. Until there is hard, publicly-verifiable proof that the Pais Effect (and all its follow-on technologies) works, Dr. Pais’ claims belong on the shelf alongside those of Pons and Fleischmann.

Tuesday, June 06, 2023

Sir Roger Penrose: Artificial Intelligence Is A Misnomer

moonofalabama  |  'Artificial Intelligence' Is (Mostly) Glorified Pattern Recognition

This somewhat funny narrative about an 'Artificial Intelligence' simulation by the U.S. Air Force appeared yesterday and was widely picked up by various mainstream media:

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.
...
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they  did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

(SEAD = Suppression of Enemy Air Defenses, SAM = Surface to Air Missile)

In the early 1990s I worked at a university, first to write a Ph.D. in economics and management and then as an associate lecturer for IT and programming. A large part of the (never finished) Ph.D. thesis was a discussion of various optimization algorithms. I programmed each and tested them on training and real-world data. Some of those mathematical algos are deterministic. They always deliver the correct result. Others are not deterministic. They only estimate the outcome and give some confidence measure or probability of how correct the presented result may be. Most of the latter involved some kind of Bayesian statistics. Then there were the (related) 'Artificial Intelligence' algos, i.e. 'machine learning'.

Artificial Intelligence is a misnomer for the (ab-)use of a family of computerized pattern recognition methods.

Well structured and labeled data is used to train the models to later have them recognize 'things' in unstructured data. Once the 'things' are found some additional algorithm can act on them.

I programmed some of these as backpropagation networks. They would, for example, 'learn' to 'read' pictures of the numbers 0 to 9 and to present the correct numerical output. To push the 'learning' in the right direction during the serial iterations that train the network, one needs a reward function or reward equation. It tells the network whether the results of an iteration are 'right' or 'wrong'. For 'reading' visual representations of numbers that is quite simple. One sets up a table with the visual representations and manually adds the numerical value one sees. After the algo has finished its guess, a lookup in the table will tell whether it was right or wrong. A 'reward' is given when the result was correct. The model will reiterate and 'learn' from there.
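To make that loop concrete, here is a minimal sketch of such a network in Python (the tiny 3x5 pixel 'digits', the network size and the training settings are assumptions made for brevity; this is not the author's original code):

```python
# A toy backpropagation network that "reads" tiny 3x5 pixel digits (a sketch
# assuming made-up bitmaps; real digit recognition would use far more data).
import numpy as np

# Hand-drawn 3x5 bitmaps for the digits 0-9 (rows flattened to 15 pixels).
DIGITS = {
    0: "111101101101111", 1: "010110010010111", 2: "111001111100111",
    3: "111001111001111", 4: "101101111001001", 5: "111100111001111",
    6: "111100111101111", 7: "111001001001001", 8: "111101111101111",
    9: "111101111001111",
}
X = np.array([[int(p) for p in bits] for bits in DIGITS.values()], float)  # 10 x 15
Y = np.eye(10)                                                             # lookup table as one-hot labels

rng = np.random.default_rng(0)
W1, W2 = rng.normal(0, 0.5, (15, 32)), rng.normal(0, 0.5, (32, 10))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):                 # iterate until the net matches the table
    H = sigmoid(X @ W1)                  # hidden layer
    P = sigmoid(H @ W2)                  # output layer, one unit per digit
    err = P - Y                          # 'right'/'wrong' signal from the lookup table
    # Backpropagation: push the error back through the two weight layers.
    dW2 = H.T @ (err * P * (1 - P))
    dW1 = X.T @ (((err * P * (1 - P)) @ W2.T) * H * (1 - H))
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1

pred = np.argmax(sigmoid(sigmoid(X @ W1) @ W2), axis=1)
print(pred)   # should print [0 1 2 3 4 5 6 7 8 9] once training has converged
```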

Once trained on numbers written in Courier typography the model is likely to also recognize numbers written upside down in Times New Roman even though they look different.

The reward function for reading 0 to 9 is simple. But the formulation of a reward function quickly evolves into a huge problem when one works, as I did, on multi-dimensional (simulated) real-world management problems. The one described by the Air Force colonel above is a good example of the potential mistakes. Presented with a huge amount of real-world data and a reward function that is somewhat wrong or too limited, a machine learning algorithm may later come up with results that are unforeseen, impossible to execute or prohibited.

Currently there is some hype about a family of large language models like ChatGPT. The program reads natural language input and processes it into some related natural language content output. That is not new. ELIZA, an early conversational program, was developed by Joseph Weizenbaum at MIT in the mid-1960s; the Artificial Linguistic Internet Computer Entity (ALICE) followed decades later. I had funny chats with ELIZA in the 1980s on a mainframe terminal. ChatGPT is a bit niftier and its iterative results, i.e. the 'conversations' it creates, may well astonish some people. But the hype around it is unwarranted.

Behind those language models are machine learning algos that have been trained on large amounts of human speech sucked from the internet. They were trained with speech patterns to then generate speech patterns. The learning part is problem number one. The material these models have been trained with is inherently biased. Did the human trainers who selected the training data include user comments lifted from pornographic sites, or did they exclude those? Ethics may have argued for excluding them. But if the model is supposed to give real-world results, the data from porn sites must be included. How does one prevent remnants of such comments from sneaking into conversations with kids that the model may later generate? There is a myriad of such problems. Does one include New York Times pieces in the training set even though one knows that they are highly biased? Will a model be allowed to produce hateful output? What is hateful? Who decides? How is that reflected in its reward function?

Currently the factual correctness of the output of the best large language models is estimated at around 80%. They process symbols and patterns but have no understanding of what those symbols or patterns represent. They cannot solve mathematical and logical problems, not even very basic ones.

There are niche applications, like translating written languages, where AI or pattern recognition achieves amazing results. But one still cannot trust them to get every word right. The models can be assistants, but one will always have to double-check their results.

Overall, the correctness of current AI models is still way too low to let them decide any real-world situation. More data or more computing power will not change that. If one wants to overcome their limitations, one will need to find some fundamentally new ideas.

Monday, June 05, 2023

Does It Make Sense To Talk About "Scale Free Cognition" In The Context Of Light Cones?

arxiv  | Broadly speaking, twistor theory is a framework for encoding physical information on space-time as geometric data on a complex projective space, known as a twistor space. The relationship between space-time and twistor space is non-local and has some surprising consequences, which we explore in these lectures. Starting with a review of the twistor correspondence for four-dimensional Minkowski space, we describe some of twistor theory’s historic successes (e.g., describing free fields and integrable systems) as well as some of its historic shortcomings. We then discuss how in recent years many of these problems have been overcome, with a view to understanding how twistor theory is applied to the study of perturbative QFT today.

These lectures were given in 2017 at the XIII Modave Summer School in mathematical physics.

Try Fitting Assembly/Constructor Theory Over Twistor Space

quantamagazine  |  Assembly theory started when Cronin asked why, given the astronomical number of ways to combine different atoms, nature makes some molecules and not others. It’s one thing to say that an object is possible according to the laws of physics; it’s another to say there’s an actual pathway for making it from its component parts. “Assembly theory was developed to capture my intuition that complex molecules can’t just emerge into existence because the combinatorial space is too vast,” Cronin said.

“We live in a recursively structured universe,” Walker said. “Most structure has to be built on memory of the past. The information is built up over time.”

Assembly theory makes the seemingly uncontroversial assumption that complex objects arise from combining many simpler objects. The theory says it’s possible to objectively measure an object’s complexity by considering how it got made. That’s done by calculating the minimum number of steps needed to make the object from its ingredients, which is quantified as the assembly index (AI).
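To get a feel for the assembly index, here is a rough brute-force sketch in Python using strings as stand-in objects and single characters as the basic building blocks (an illustration only, not Cronin and Walker's actual algorithm; the search is exponential, so it only works for very short strings):

```python
# Brute-force assembly index for short strings: the minimum number of pairwise
# joins needed to build the target, where every fragment built along the way
# (and every basic block) can be reused.

def assembly_index(target):
    def buildable(pool, depth):
        # Can `target` be reached from `pool` within `depth` more join steps?
        if target in pool:
            return True
        if depth == 0:
            return False
        for a in pool:
            for b in pool:
                joined = a + b
                # Only fragments of the target can ever appear in a shortest build.
                if joined in target and joined not in pool:
                    if buildable(pool | {joined}, depth - 1):
                        return True
        return False

    start = frozenset(target)          # single characters come "for free"
    k = 0
    while not buildable(start, k):     # iterative deepening => minimal step count
        k += 1
    return k

print(assembly_index("aaaaaaaa"))  # 3: aa -> aaaa -> aaaaaaaa (doubling by reuse)
print(assembly_index("banana"))    # 4: na -> nana -> ba -> banana
```

The point of the toy example is the reuse: a highly patterned object can have an assembly index far below its length, while a random string of the same length cannot.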

In addition, for a complex object to be scientifically interesting, there has to be a lot of it. Very complex things can arise from random assembly processes — for example, you can make proteinlike molecules by linking any old amino acids into chains. In general, though, these random molecules won’t do anything of interest, such as behaving like an enzyme. And the chances of getting two identical molecules in this way are vanishingly small.

Functional enzymes, however, are made reliably again and again in biology, because they are assembled not at random but from genetic instructions that are inherited across generations. So while finding a single, highly complex molecule doesn’t tell you anything about how it was made, finding many identical complex molecules is improbable unless some orchestrated process — perhaps life — is at work.

Assembly theory predicts that objects like us can’t arise in isolation — that some complex objects can only occur in conjunction with others. This makes intuitive sense; the universe could never produce just a single human. To make any humans at all, it had to make a whole bunch of us.

In accounting for specific, actual entities like humans in general (and you and me in particular), traditional physics is only of so much use. It provides the laws of nature, and assumes that specific outcomes are the result of specific initial conditions. In this view, we must have been somehow encoded in the first moments of the universe. But it surely requires extremely fine-tuned initial conditions to make Homo sapiens (let alone you) inevitable.

Assembly theory, its advocates say, escapes from that kind of overdetermined picture. Here, the initial conditions don’t matter much. Rather, the information needed to make specific objects like us wasn’t there at the outset but accumulates in the unfolding process of cosmic evolution — it frees us from having to place all that responsibility on an impossibly fine-tuned Big Bang. The information “is in the path,” Walker said, “not the initial conditions.”

Cronin and Walker aren’t the only scientists attempting to explain how the keys to observed reality might not lie in universal laws but in the ways that some objects are assembled or transformed into others. The theoretical physicist Chiara Marletto of the University of Oxford is developing a similar idea with the physicist David Deutsch. Their approach, which they call constructor theory and which Marletto considers “close in spirit” to assembly theory, considers which types of transformations are and are not possible.

“Constructor theory talks about the universe of tasks able to make certain transformations,” Cronin said. “It can be thought of as bounding what can happen within the laws of physics.” Assembly theory, he says, adds time and history into that equation.

To explain why some objects get made but others don’t, assembly theory identifies a nested hierarchy of four distinct “universes.”

In the Assembly Universe, all permutations of the basic building blocks are allowed. In the Assembly Possible, the laws of physics constrain these combinations, so only some objects are feasible. The Assembly Contingent then prunes the vast array of physically allowed objects by picking out those that can actually be assembled along possible paths. The fourth universe is the Assembly Observed, which includes just those assembly processes that have generated the specific objects we actually see.

[Figure credit: Merrill Sherman/Quanta Magazine; source: https://doi.org/10.48550/arXiv.2206.02279]

Assembly theory explores the structure of all these universes, using ideas taken from the mathematical study of graphs, or networks of interlinked nodes. It is “an objects-first theory,” Walker said, where “the things [in the theory] are the objects that are actually made, not their components.”

To understand how assembly processes operate within these notional universes, consider the problem of Darwinian evolution. Conventionally, evolution is something that “just happened” once replicating molecules arose by chance — a view that risks being a tautology, because it seems to say that evolution started once evolvable molecules existed. Instead, advocates of both assembly and constructor theory are seeking “a quantitative understanding of evolution rooted in physics,” Marletto said.

According to assembly theory, before Darwinian evolution can proceed, something has to select for multiple copies of high-AI objects from the Assembly Possible. Chemistry alone, Cronin said, might be capable of that — by narrowing down relatively complex molecules to a small subset. Ordinary chemical reactions already “select” certain products out of all the possible permutations because they have faster reaction rates.

The specific conditions in the prebiotic environment, such as temperature or catalytic mineral surfaces, could thus have begun winnowing the pool of life’s molecular precursors from among those in the Assembly Possible. According to assembly theory, these prebiotic preferences will be “remembered” in today’s biological molecules: They encode their own history. Once Darwinian selection took over, it favored those objects that were better able to replicate themselves. In the process, this encoding of history became stronger still. That’s precisely why scientists can use the molecular structures of proteins and DNA to make deductions about the evolutionary relationships of organisms.

Thus, assembly theory “provides a framework to unify descriptions of selection across physics and biology,” Cronin, Walker and colleagues wrote. “The ‘more assembled’ an object is, the more selection is required for it to come into existence.”

“We’re trying to make a theory that explains how life arises from chemistry,” Cronin said, “and doing it in a rigorous, empirically verifiable way.”

 

Sunday, June 04, 2023

Penrose's "Missing" Link Between The Physics Of The Large And The Physics Of The Small

wikipedia  |  The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level.[1][2][3]

Penrose's idea is inspired by quantum gravity, because it uses both of the physical constants ℏ (the reduced Planck constant) and G (Newton's gravitational constant). It is an alternative to the Copenhagen interpretation, which posits that superposition fails when an observation is made (but that it is non-objective in nature), and to the many-worlds interpretation, which states that alternative outcomes of a superposition are equally "real", while their mutual decoherence precludes subsequent observable interactions.

Penrose's idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level".[1] He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose's "'one-graviton' level" criterion forms the basis of his prediction, providing an objective criterion for wave function collapse.[1] Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation.[4][5] Recent work indicates an increasingly deep inter-relation between quantum mechanics and gravitation.[6][7]

Accepting that wavefunctions are physically real, Penrose believes that matter can exist in more than one place at one time. In his opinion, a macroscopic system, like a human being, cannot exist in more than one place for a measurable time, as the corresponding energy difference is very large. A microscopic system, like an electron, can exist in more than one location significantly longer (thousands of years), until its space-time curvature separation reaches the collapse threshold.[8][9]

In Einstein's theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortions in space-time, yielding two superposed gravitational fields. According to Penrose's theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes.[2]

Penrose speculates that the transition between macroscopic and quantum states begins at the scale of dust particles (the mass of which is close to a Planck mass). He has proposed an experiment to test this theory, called FELIX (free-orbit experiment with laser interferometry X-rays), in which an X-ray laser in space is directed toward a tiny mirror and split by a beam splitter from tens of thousands of miles away, with which the photons are directed toward other mirrors and reflected back. One photon will strike the tiny mirror while moving to another mirror and move the tiny mirror back as it returns. According to conventional quantum theories, the tiny mirror can exist in superposition for a significant period of time. This would prevent any photons from reaching the detector. If Penrose's hypothesis is correct, the mirror's superposition will collapse to one location in about a second, allowing half the photons to reach the detector.[2]

However, because this experiment would be difficult to arrange, a table-top version that uses optical cavities to trap the photons long enough for achieving the desired delay has been proposed instead.[10]

 

Saturday, June 03, 2023

Why Quantum Mechanics Is An Inconsistent Theory

wikipedia  | The Diósi–Penrose model was introduced as a possible solution to the measurement problem, where the wave function collapse is related to gravity. The model was first suggested by Lajos Diósi when studying how possible gravitational fluctuations may affect the dynamics of quantum systems.[1][2] Later, following a different line of reasoning, R. Penrose arrived at an estimation for the collapse time of a superposition due to gravitational effects, which is the same (within an unimportant numerical factor) as that found by Diósi, hence the name Diósi–Penrose model. However, it should be pointed out that while Diósi gave a precise dynamical equation for the collapse,[2] Penrose took a more conservative approach, estimating only the collapse time of a superposition.[3]

It is well known that general relativity and quantum mechanics, our most fundamental theories for describing the universe, are not compatible, and the unification of the two is still missing. The standard approach to overcoming this situation is to try to modify general relativity by quantizing gravity. Penrose suggests an opposite approach, what he calls the “gravitization of quantum mechanics”, where quantum mechanics gets modified when gravitational effects become relevant.[3][4][9][11][12][13] The reasoning underlying this approach is the following: take a massive system in a well-localized state in space. In this case, the state being well localized, the induced space–time curvature is well defined. According to quantum mechanics, because of the superposition principle, the system can be placed (at least in principle) in a superposition of two well-localized states, which would lead to a superposition of two different space–times. The key idea is that since the space–time metric should be well defined, nature “dislikes” these space–time superpositions and suppresses them by collapsing the wave function to one of the two localized states.

To set these ideas on a more quantitative ground, Penrose suggested that a way of measuring the difference between two space–times, in the Newtonian limit, is

$$\Delta E = \frac{1}{4\pi G}\int d^{3}x\,\big[\mathbf{g}_{1}(\mathbf{x})-\mathbf{g}_{2}(\mathbf{x})\big]^{2} \qquad (9)$$

where $\mathbf{g}_{i}(\mathbf{x})$ is the Newtonian gravitational acceleration at the point $\mathbf{x}$ when the system is localized around the $i$-th position. The acceleration can be written in terms of the corresponding gravitational potential $\Phi_{i}(\mathbf{x})$, i.e. $\mathbf{g}_{i}(\mathbf{x})=-\nabla\Phi_{i}(\mathbf{x})$. Using this relation in Eq. (9), together with the Poisson equation $\nabla^{2}\Phi_{i}(\mathbf{x})=4\pi G\,\mu_{i}(\mathbf{x})$, with $\mu_{i}(\mathbf{x})$ giving the mass density when the state is localized around the $i$-th position, and its solution, one arrives at

$$\Delta E = G\int d^{3}x\,d^{3}y\;\frac{\big[\mu_{1}(\mathbf{x})-\mu_{2}(\mathbf{x})\big]\big[\mu_{1}(\mathbf{y})-\mu_{2}(\mathbf{y})\big]}{|\mathbf{x}-\mathbf{y}|} \qquad (10)$$

The corresponding decay time can be obtained from the Heisenberg time–energy uncertainty relation:

$$\tau \simeq \frac{\hbar}{\Delta E} \qquad (11)$$

which, apart from a numerical factor simply due to the use of different conventions, is exactly the same as the decay time derived from Diósi's model. This is the reason why the two proposals are jointly named the Diósi–Penrose model.
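To get a sense of the scale set by Eq. (11), here is a small numeric sketch (an illustration under simplifying assumptions, not part of the article): for a rigid homogeneous sphere of mass m and radius R put into a superposition of two positions separated by much more than R, the double integral in Eq. (10) reduces to roughly ΔE ≈ (12/5) G m²/R, and the collapse time follows from τ ≈ ℏ/ΔE. The exact prefactor depends on the convention chosen in Eq. (9).

```python
# Order-of-magnitude estimate of the Diosi-Penrose collapse time, Eq. (11), for
# a small sphere held in a superposition of two well-separated positions.
# Assumes the far-separated limit Delta_E ~ (12/5) * G * m**2 / R.

G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34      # reduced Planck constant, J s

def collapse_time(radius_m, density_kg_m3):
    mass    = density_kg_m3 * 4.0 / 3.0 * 3.14159 * radius_m**3
    delta_E = 12.0 / 5.0 * G * mass**2 / radius_m   # gravitational self-energy gap
    return HBAR / delta_E                           # Heisenberg time-energy relation

# A ~1 micron silica grain collapses within a fraction of a second ...
print(f"1 um grain : {collapse_time(1e-6, 2200):.2e} s")
# ... while a proton-sized lump of nuclear-density matter would stay
# superposed for tens of millions of years.
print(f"proton-ish : {collapse_time(1e-15, 2.3e17):.2e} s")
```

These orders of magnitude match the qualitative picture above: macroscopic superpositions die almost instantly, microscopic ones persist essentially unperturbed.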

More recently, Penrose suggested a new and quite elegant way to justify the need for a gravity-induced collapse, based on avoiding tensions between the superposition principle and the equivalence principle, the cornerstones of quantum mechanics and general relativity. In order to explain it, let us start by comparing the evolution of a generic state in the presence of a uniform gravitational acceleration $\mathbf{a}$. One way to perform the calculation, what Penrose calls the “Newtonian perspective”,[4][9] consists in working in an inertial frame with space–time coordinates $(\mathbf{x}, t)$ and solving the Schrödinger equation in the presence of the potential $V(\mathbf{x}) = -m\,\mathbf{a}\cdot\mathbf{x}$ (typically, one chooses the coordinates in such a way that the acceleration is directed along the $z$ axis, in which case $V = -maz$). Alternatively, because of the equivalence principle, one can choose to go to the free-fall reference frame, with coordinates $(\mathbf{X}, T)$ related to $(\mathbf{x}, t)$ by $\mathbf{X} = \mathbf{x} - \tfrac{1}{2}\mathbf{a}t^{2}$ and $T = t$, solve the free Schrödinger equation in that reference frame, and then write the result in terms of the inertial coordinates $(\mathbf{x}, t)$. This is what Penrose calls the “Einsteinian perspective”. The solution $\psi_{E}(\mathbf{x},t)$ obtained in the Einsteinian perspective and the one $\psi_{N}(\mathbf{x},t)$ obtained in the Newtonian perspective are related to each other by

$$\psi_{E}(\mathbf{x},t) = e^{\frac{i}{\hbar}\left(\frac{1}{6}\,m\,a^{2}t^{3} \,-\, m\,\mathbf{a}\cdot\mathbf{x}\,t\right)}\,\psi_{N}(\mathbf{x},t) \qquad (12)$$

Since the two wave functions are equivalent apart from an overall phase, they lead to the same physical predictions, which implies that there are no problems in this situation, where the gravitational field always has a well-defined value. However, if the space–time metric is not well defined, then we will be in a situation where there is a superposition of a gravitational field corresponding to an acceleration $\mathbf{a}_{1}$ and one corresponding to an acceleration $\mathbf{a}_{2}$. This does not create problems as long as one sticks to the Newtonian perspective. However, when using the Einsteinian perspective, it implies a phase difference between the two branches of the superposition given by $e^{\frac{i}{\hbar}\left[\frac{1}{6}m\left(a_{1}^{2}-a_{2}^{2}\right)t^{3}\,-\,m\left(\mathbf{a}_{1}-\mathbf{a}_{2}\right)\cdot\mathbf{x}\,t\right]}$. While the term in the exponent linear in the time does not lead to any conceptual difficulty, the first term, proportional to $t^{3}$, is problematic, since it is a non-relativistic residue of the so-called Unruh effect: in other words, the two terms in the superposition belong to different Hilbert spaces and, strictly speaking, cannot be superposed. Here is where the gravity-induced collapse plays a role, collapsing the superposition when the first term of the phase becomes too large.
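As a consistency check on Eq. (12) (a sketch under the sign conventions used above, in one dimension with the potential written as V(x) = -m a x, and with a plane wave playing the role of the free-fall solution; the variable names are ours), one can verify symbolically that multiplying a free-fall solution by the phase in Eq. (12) yields a solution of the Schrödinger equation with a uniform gravitational acceleration:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, a, hbar, k = sp.symbols('m a hbar k', positive=True)

# Free-fall ("Einsteinian") solution: a plane wave in the coordinates
# X = x - a*t**2/2, T = t.
X = x - a * t**2 / 2
psi_E = sp.exp(sp.I * (k * X - hbar * k**2 * t / (2 * m)))

# Newtonian wave function obtained by inverting Eq. (12):
# psi_N = exp(-i/hbar * (m a^2 t^3 / 6 - m a x t)) * psi_E
psi_N = sp.exp(-sp.I / hbar * (m * a**2 * t**3 / 6 - m * a * x * t)) * psi_E

# Residual of the Schroedinger equation with the linear potential V(x) = -m a x:
# i hbar d_t psi = -hbar^2/(2m) d_x^2 psi - m a x psi
residual = (sp.I * hbar * sp.diff(psi_N, t)
            + hbar**2 / (2 * m) * sp.diff(psi_N, x, 2)
            + m * a * x * psi_N)

print(sp.simplify(residual))   # -> 0, so psi_N solves the accelerated problem
```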
