The Overlap Gap Property and Approximate Message Passing Algorithms for spin models
Abstract
We consider the algorithmic problem of finding a near ground state (near optimal solution) of a $p$-spin model. We show that for a class of algorithms broadly defined as Approximate Message Passing (AMP), the presence of the Overlap Gap Property (OGP), appropriately defined, is a barrier. We conjecture that the model does indeed exhibit the OGP when $p\ge 4$ (and prove it for the space of binary solutions). Assuming the validity of this conjecture, as an implication, the AMP fails to find near ground states in these models, per our result. We extend our result to the problem of finding pure states by means of Thouless-Anderson-Palmer (TAP) based iterations, which are yet another example of AMP type algorithms. We show that such iterations fail to find pure states approximately, subject to the conjecture that the space of pure states exhibits the OGP, appropriately stated, when $p\ge 4$.
1 Introduction
Given a tensor $J\in\mathbb{R}^{n^p}$ of order $p$ and a vector $\sigma\in\mathbb{R}^n$, define the usual inner tensor product by
(1) 
Consider the associated normalized variational problem over the binary cube $\{-1,+1\}^n$:
(2) 
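The objective and the variational problem above can be illustrated by a brute-force computation on a tiny instance. The sketch below is purely illustrative: the normalization by $n$ and the choice of couplings are assumptions for the sake of a runnable example, and exhaustive search is of course feasible only for very small $n$, which is exactly why polynomial time algorithms are the subject of this paper.

```python
import itertools
import random

def tensor_inner(J, sigma, p):
    """Inner tensor product: sum over all index tuples (i_1, ..., i_p)
    of J[i_1, ..., i_p] * sigma[i_1] * ... * sigma[i_p], as in (1)."""
    n = len(sigma)
    total = 0.0
    for idx in itertools.product(range(n), repeat=p):
        prod = J[idx]
        for i in idx:
            prod *= sigma[i]
        total += prod
    return total

def ground_state(J, n, p):
    """Exhaustive minimization of the normalized objective over the
    binary cube {-1,+1}^n (illustrative normalization by n)."""
    best_val, best_sigma = float("inf"), None
    for sigma in itertools.product((-1.0, 1.0), repeat=n):
        val = tensor_inner(J, sigma, p) / n
        if val < best_val:
            best_val, best_sigma = val, sigma
    return best_val, best_sigma

# Tiny instance with i.i.d. Gaussian couplings (parameters are illustrative).
random.seed(0)
n, p = 4, 3
J = {idx: random.gauss(0.0, 1.0) for idx in itertools.product(range(n), repeat=p)}
gs_val, gs_sigma = ground_state(J, n, p)
```

Note that for odd $p$ flipping every sign of $\sigma$ negates the objective, so the minimum found this way is nonpositive.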
The case when $J$ consists of i.i.d. zero mean Gaussian random entries corresponds to the problem of finding a ground state of a $p$-spin model with Gaussian couplings, and the (unique) vector achieving the minimum value is called the ground state [Pan13]. The choice of the variance and of the normalization is dictated by the associated Gibbs distribution, defined by assigning to each $\sigma$ a probability weight proportional to the exponential of $-\beta$ times the (scaled) objective value, for some fixed inverse temperature parameter $\beta$. In this case the partition function
is well approximated by the contribution of near optimal solutions as $\beta$ increases, and the optimal value of (2) is known to converge to a strictly negative limiting value with high probability (w.h.p.) as $n\to\infty$. For us, though, the details of the choice of scaling are immaterial, and the variational problem above is equivalent to the case when $J$ consists of i.i.d. standard normal entries and the normalization is skipped. Another standard assumption in the literature is the symmetry of $J$, for example assuming that the entries are fully determined by the i.i.d. entries corresponding to index tuples $i_1\le i_2\le\cdots\le i_p$. This difference is again immaterial. Indeed, consider the tensor defined by
where the symmetrized tensor $J^{\mathrm{sym}}$ is defined by
$J^{\mathrm{sym}}_{i_1,\dots,i_p} = \frac{1}{p!}\sum_{\pi} J_{i_{\pi(1)},\dots,i_{\pi(p)}}$,
the sum running over all permutations $\pi$ of $\{1,\dots,p\}$. Note that $J^{\mathrm{sym}}$ is symmetric and satisfies $\langle J^{\mathrm{sym}},\sigma^{\otimes p}\rangle = \langle J,\sigma^{\otimes p}\rangle$ for every $\sigma$.
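The symmetrization step above is a simple averaging over index permutations. A minimal sketch, with the tensor stored as a dictionary keyed by index tuples (an implementation assumption, not notation from the paper):

```python
import itertools
import math
import random

def symmetrize(J, n, p):
    """Average each entry of J over all permutations of its index tuple.
    The result is a symmetric tensor yielding the same inner product
    against sigma tensored p times, since that product is invariant
    under permuting the indices."""
    perms = list(itertools.permutations(range(p)))
    Jsym = {}
    for idx in itertools.product(range(n), repeat=p):
        Jsym[idx] = sum(J[tuple(idx[q] for q in perm)] for perm in perms) / len(perms)
    return Jsym

# Small demonstration instance (sizes are illustrative).
random.seed(1)
n, p = 3, 2
J = {idx: random.gauss(0.0, 1.0) for idx in itertools.product(range(n), repeat=p)}
Jsym = symmetrize(J, n, p)
```

For $p=2$ this reduces to replacing a matrix by its symmetric part $(J + J^{\top})/2$.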
In the present paper we focus on the algorithmic question of solving the minimization problem (2) approximately and efficiently (in polynomial time). That is, the question is one of existence of a polynomial time algorithm which for every $\epsilon>0$ produces a sequence of solutions $\sigma_n\in\{-1,+1\}^n$ satisfying
as $n\to\infty$, ideally w.h.p. This problem was successfully solved recently by Montanari [Mon18] in the case of the Sherrington-Kirkpatrick model, which is the special case corresponding to $p=2$. The result, though, assumes the validity of a (widely believed) conjecture that the overlap distribution function is strictly increasing. In particular, it assumes the absence of an interval inside the support of the overlap distribution with zero mass, namely that the overlap distribution does not exhibit the Overlap Gap Property (OGP). The algorithm is based on a variant of Approximate Message Passing (AMP) type algorithms, which in the context of spin glasses is well motivated by the so-called Thouless-Anderson-Palmer (TAP) equations describing the magnetization of spins in spin glass models. AMP, as a class of algorithms, was also found to be one of the most effective classes of algorithms in many models of signal processing [Kab03, DMM09, Bol14, BM11, JM13, BLM15, BMN17], specifically models involving a "planted signal" (which does not apply to our spin model). The algorithmic result of [Mon18] was in turn inspired by a similar result by Subag [Sub18] regarding the problem of finding a near ground state in a spherical "mixed $p$-spin" model. There one considers a linear combination of objectives of the form (1) as one varies $p$, with the coefficients held fixed, and optimizes over the unit sphere instead of the binary cube. A polynomial time construction of near optimal solutions is provided, under the assumption that the model does not exhibit the OGP (see part (2) of Proposition 1 in [Sub18]). For the case of spherical models, the necessary and sufficient conditions for the OGP are known [CS17, JT17a, Tal06]. Both $p$-spin and spherical $p$-spin models are related to the Random Energy Model (REM), considered from the algorithmic perspective by Addario-Berry and Maillard [ABM18], where, in contrast to [Mon18] and [Sub18], algorithmic hardness is established away from the ground state value.
One should note, however, that the REM is an oracle based optimization problem and thus does not fit the classical input size based algorithmic complexity framework arising in the context of $p$-spin and spherical $p$-spin models.
At the same time it is known that the OGP does take place in $p$-spin models for even $p\ge 4$, as was established in [CGP19, Theorem 3]. In particular, it was shown that for every even $p\ge 4$ there exist $0<\nu_1<\nu_2<1$ such that w.h.p. for every pair of near optimal solutions, the associated normalized overlap, defined simply as the normalized absolute value of the inner product, is either at most $\nu_1$ or at least $\nu_2$. This naturally raises the question as to whether the OGP creates a barrier to the success of AMP when $p\ge 4$. The main result of our paper, Theorem 3.3, is to establish precisely this fact under the assumption that a certain relaxed version of the OGP takes place when $p\ge 4$. The relaxed version concerns the optimization problem when the binary constraint is relaxed to the Hilbert cube $[-1,1]^n$, and is otherwise defined in the same way as for the binary cube. This relaxed version would be a rather straightforward implication of the OGP for binary solutions if one could show that every nearly optimal solution in $[-1,1]^n$ is nearly binary. Unfortunately, even this fact is not known, and we leave it as an interesting, though, as we believe, approachable open problem. As a consequence of our main result, we show that an extension of the AMP result of [Mon18] to the case $p\ge 4$ is not possible. As another implication, we show that a natural iterative scheme of computing the fixed point of the TAP equations fails as well in the case $p\ge 4$. We note that this iterative scheme is known to succeed in the high temperature regime due to the result of [Bol14]. Another important class of algorithms ruled out by our negative result is gradient descent type algorithms. Since the gradient of the objective is a linear combination of vectors of the form defined below in (3), a discrete implementation of the gradient descent algorithm with some choice of step sizes is also a special case of our class of AMP algorithms.
One challenge in establishing our result formally is the formalization of the class of AMP algorithms to begin with. Unfortunately, there is no single formal definition for it; rather, there is a loosely specified scheme for a class of iterations inspired by Belief Propagation type algorithms. The iterations are performed for some constant number of rounds $T$, with each round applying a (in general round-dependent) function involving the vector defined by
(3) 
for any $\sigma$, as well as other nonlinear operators, typically defined through some kind of univariate or multivariate nonlinear functions applied coordinate-wise. We note that (3) is simply a matrix-vector product when $p=2$.
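The contraction (3) can be written out directly. The sketch below fixes the first index and contracts the remaining $p-1$ indices against copies of $\sigma$; the dictionary storage of the tensor is an implementation assumption made for the example.

```python
import itertools
import math
import random

def tensor_vec(J, sigma, p):
    """The vector in (3): contract the order-p tensor J against p-1
    copies of sigma, leaving the first index free. For p = 2 this is
    exactly the matrix-vector product J sigma."""
    n = len(sigma)
    out = []
    for i in range(n):
        s = 0.0
        for rest in itertools.product(range(n), repeat=p - 1):
            prod = J[(i,) + rest]
            for j in rest:
                prod *= sigma[j]
            s += prod
        out.append(s)
    return out

# Sanity check against the p = 2 (matrix) case on a small random instance.
random.seed(2)
n = 3
J2 = {idx: random.gauss(0.0, 1.0) for idx in itertools.product(range(n), repeat=2)}
sigma = [1.0, -1.0, 1.0]
v = tensor_vec(J2, sigma, 2)
```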
Thus, as a first step, we introduce a precise class of such iterative algorithms (functions) and associate it with a precise set of assumptions. We show separately that the algorithm of [Mon18] is a special case. We assume that the results of each iteration are truncated so that the resulting vector always belongs to a bounded region of the form $[-B,B]^n$ for some constant $B$. The rationale for the truncation is as follows. In the implementation of the AMP, the iterations produce a real valued vector which is then projected to a vector in the binary cube in a way discussed below. The idea here is that this vector is "close enough" to some vector which is a near ground state. In particular, its typical entries are "not too far" from the interval $[-1,1]$, and in particular are bounded by a constant. We restrict every iterate to lie in $[-B,B]^n$ for technical convenience. The rounding scheme assumed to be adopted by our class of AMP algorithms is similar to the one used in [Mon18]: first the iterate is projected to the Hilbert cube $[-1,1]^n$ via a natural truncation, and then some rounding scheme is adopted by the algorithm designer which is guaranteed asymptotically to never lower the quality of the solution. Our main result, stated more precisely, says that for any AMP algorithm thus defined, the vector produced before the rounding step is w.h.p. suboptimal, namely its objective value exceeds the optimal value by some fixed constant, w.h.p. as $n\to\infty$. Thus we establish that the vector obtained in the penultimate (before rounding) step of the AMP is suboptimal. We note that it is precisely this vector which is shown to be nearly optimal in the case $p=2$ in the argument of [Mon18]. The last step of converting a real vector to a binary one is used there just in order to obtain a genuinely binary vector. We do not establish that the ultimate (binary) vector is suboptimal, and this is a limitation of our technique.
We note, however, that showing near optimality of the rounded binary vector without showing near optimality of its fractional preimage would amount to believing that the rounding is somehow mysteriously capable of producing a near optimal binary solution from a presumably far from optimal fractional solution, which does not seem plausible, and which is not established in [Mon18]. Nevertheless, it would admittedly be a more complete result to show directly that the rounded vector is far from optimal, without assuming the same for its preimage, but we are currently unable to make this argument rigorous, and leave it for further investigation.
Proof of the main result. Outline
We now describe the main ingredients of our proof. First, as a consequence of a result established in [CGP19], we show that the OGP holds w.h.p. not just for one instance of the tensor, but for a continuous family of interpolated tensors built from the given instance and an independent instance. Note that for each fixed interpolation parameter, the corresponding tensor has the same distribution as the original one. An easy consequence of the OGP result in [CGP19] and the chaos result of [CHL18] is the fact, which we prove in this paper (Theorem 3.4), that the OGP holds for the whole family as well, in the sense that for any two members of the family and any pair of their respective near optimal solutions, the normalized overlap again avoids the interval $(\nu_1,\nu_2)$, for the same values $\nu_1,\nu_2$. The chaos property, roughly speaking, says that near optimal solutions of independent instances are nearly orthogonal; see Theorem 3.5 below. Our main conjecture regarding the OGP (Conjecture 3.2), which we use as an assumption of the main result, is that the OGP in fact holds for near optimal solutions in the Hilbert cube $[-1,1]^n$, as opposed to those in the binary cube, for the same family of instances. Establishing this conjecture is an interesting open question.
The main ingredient of the proof is then to show that the iterations, as functions of the tensor, are sufficiently "continuous" with respect to perturbations of its entries. Specifically, we obtain an upper bound on the distance between the outputs of the AMP on two interpolated instances which is sufficiently continuous in the interpolation parameter. This result is the subject of Theorem 5.1. A straightforward implication is that the same bound applies after the truncation to the Hilbert cube. Separately, we use the independence of the two extreme instances to argue the near orthogonality of the corresponding outputs. The continuity result is then used to show that, for an appropriate choice of the interpolation parameter, the overlap of the two outputs lands inside the forbidden interval. The (conjectured) OGP then implies that this choice corresponds to a sufficiently suboptimal solution, which contradicts the concentration property of the AMP output, which we establish separately using standard techniques, including Gaussian concentration of measure and Kirszbraun's Theorem.
Prior results on OGP and algorithmic implications
The concept of the OGP originates in the study of spin glass models, specifically the study of the overlap distribution of replicas generated according to some associated Gibbs distribution, such as the one described above. Understanding the limiting distribution of overlaps is of utmost importance to spin glass theory and has received significant attention [AC15, ACZ17, JT17b, JT18]. The first connections between the study of overlaps and algorithms were made in the context of random constraint satisfaction problems, such as the random K-SAT problem and many other similar problems. These problems exhibit an "infamous" gap between the range of parameters for which a satisfying assignment exists and those for which solutions can be found in polynomial time. The apparent hardness was linked conjecturally to the clustering (shattering) property of these models, which was discovered to appear roughly in the regime where known polynomial time algorithms fail [ACO08, ACORT11, MMZ05, MPZ02, COE11]. The clustering property says, roughly speaking, that a large part of the set of satisfying assignments can be partitioned into clusters separated by a Hamming distance of the order of the size of the model itself. It is notable that the proof technique used to establish such a clustering property actually shows something more: the overlaps between pairs of typical (random in some appropriate sense) satisfying assignments lie in a disconnected union of intervals. Thus the set of solutions is disconnected not only with respect to its ambient metric space, but also with respect to its one-dimensional projection onto the set of possible overlap values. The proof technique relies on a fairly standard application of the moment method.
The disconnectivity of overlaps (that is, the presence of overlap gaps) was later used as an obstruction to a class of local algorithms defined as so-called Factors of IID in [GS17a, RV17, GS17b, CGP19], and to random walk type algorithms in [COHH17]. It is this line of work which is the closest in spirit to the present one, as one can think of AMP as a natural definition of "local" algorithms on dense instances, that is, instances not defined on sparse graphs and hypergraphs.
It is important to note that while the OGP implies the clustering property, the converse in general is not true. Indeed, if the OGP takes place, then one can partition the set of all solutions of interest into those which have overlap at least $\nu_2$ with some arbitrarily chosen solution $\sigma_0$ (thus marked "Cluster 1"), versus solutions with overlap at most $\nu_1$ with $\sigma_0$ (thus marked "other clusters"), leading to a set of at least two clusters separated by a significant distance. On the other hand, one can easily create a subset of the binary cube for which the set of all overlaps spans an entire interval, while at the same time the subset admits a clustering partition.
The OGP was further established for some other models, some involving planted signals [GL18, DI17, GJS19]. It was shown in [GJS19] to be an obstruction to Glauber Dynamics type algorithms by showing that the OGP implies the existence of a free energy well, a property which was shown to be a barrier for Markov chain type algorithms in problems involving planted signals [BAGJ18a]. A related notion of free energy barriers associated with these gaps was also shown to be an obstruction for local Markov chain type algorithms for problems of the class considered here in [BAJ18], where it was also shown that these free energy barriers occur in a broad class of models including both the $p$-spin and spherical $p$-spin models. It can be shown that the OGP implies the existence of a free energy barrier at sufficiently low temperatures. It is of interest to establish the broadest class of algorithms for which the OGP is a provable barrier.
The remainder of the paper is structured as follows. In the next section we introduce the formalism of the AMP algorithms. In Section 3 we give the definition of the OGP, state the corresponding conjecture and state our main result. The validity of the OGP for binary solutions is proven in the same section. Some preliminary technical results are established in Section 4. Our main technical result is Theorem 5.1, which is stated and proven in Section 5. We note that it is a purely deterministic result showing that the output of the AMP depends on the values of the tensor sufficiently continuously. In Section 6 we establish the concentration property of the solution produced by the AMP around its expectation. Our main theorem is proven in Section 7. In Section 8 we consider TAP solutions and show that a natural class of iterations suggested by TAP fails to find the fixed point of TAP, modulo the same Conjecture 3.2, since the iterations are a special case of the class of AMP algorithms we define. This result is a direct implication of our main result, Theorem 3.3. It contrasts with the positive result of Bolthausen [Bol14], which establishes that these iterations do converge to the solution of the TAP equations in the high-temperature setting. In Section 9 we verify that the AMP algorithm constructed in [Mon18] also fits the general definition of AMP introduced in this paper. Finally, we conclude in Section 10 where we state some open questions.
2 Approximate Message Passing iterations formalism
For vectors $x,y\in\mathbb{R}^n$, $\langle x,y\rangle$ denotes their inner product. For any tensor $J$, $\|J\|_F$ denotes the Frobenius norm (the Euclidean norm of the entries of $J$), and $\|J\|_{\mathrm{op}}$ denotes the operator norm
where the maximum is over all unit vectors. By the Cauchy-Schwarz inequality, $\|J\|_{\mathrm{op}}\le \|J\|_F$.
Throughout the paper, $J$ denotes an order-$p$ tensor of size $n^p$ consisting of i.i.d. Gaussian entries. For any $\sigma\in\mathbb{R}^n$ let
so that the value at any $\sigma$ agrees, after normalization, with the inner tensor product as in (1). For any $\sigma$ we also introduce
(4) 
defined by
Similarly, for any $\sigma$ we use the corresponding shorthand notation. We recall the definition of the objective from (2). Observe that we may view it as a centered Gaussian process indexed by $\sigma$, which has covariance
In particular, the variance is the same for every $\sigma$ in the binary cube. The following concentration result is then an immediate consequence of the Borell-TIS inequality, Theorem 2.1.11 of [AT09].
Theorem 2.1.
For every $\epsilon>0$,
for all sufficiently large $n$.
A major consequence of developments in spin glass theory is the existence of the limit
(5) 
which by Theorem 2.1 also implies that the limit holds w.h.p. as $n\to\infty$.
We now introduce a set of assumptions which are used to define a class of AMP algorithms. Fix a positive integer $T$ and a constant $B>0$. Consider two sequences of functions $(f_t)_{t\le T}$ and $(F_t)_{t\le T}$.
Assumption 2.2.
The functions $f_t$ and $F_t$ are Lipschitz continuous on their respective domains. More precisely, there exists a constant $L$ such that for all $t$,
(6)  
(7) 
The assumption (7) says that the function $F_t$ is Lipschitz on an infinite rectangle. This will be required due to the special role played by the first variable of $F_t$.
Fix a positive constant $B$. Let $\mathcal{T}_B$ denote the truncation to $[-B,B]$; when the argument is a vector, $\mathcal{T}_B$ is assumed to be applied coordinate-wise. We now define the iterations forming the basis of AMP. Fix an initial vector and define the sequence of iterates as follows
(8) 
where $f_t$ and $F_t$ are applied component-wise. In other words, in step $t$, first a vector is formed by applying $f_t$ coordinate-wise (recall the domain of $f_t$). Then this vector is used to define a vector via (3). This vector is concatenated with the prior iterates to form a matrix. Then the function $F_t$ is applied coordinate-wise. Finally, the truncation is applied to each of the coordinates of the vector thus obtained, resulting in the next iterate.
We now describe an algorithm which uses AMP to generate a solution in the binary cube. For this purpose we assume that the algorithm designer has access to some (computable) projection function onto the binary cube. We discuss this further below.
Algorithm 2.3 (AMP Algorithm).
The algorithm is parametrized by the number of iterations $T$ and the functions above.

Input: the tensor $J$.

Compute the iterates using (8).

Project the final iterate to the binary cube by applying the projection transformation coordinate-wise.

Output the resulting vector.
In some sense the details of the projection are immaterial to us, since our negative result concerns the quality of the pre-projection solution itself and not its projection. Nevertheless, for completeness we now describe the projection used in [Mon18]. The projection there was defined only for $p=2$, which was the case of interest, but it is straightforward to extend the idea to the case of general $p$. Starting from the truncated vector, one proceeds coordinate by coordinate: at step $j$, keep all coordinates other than the $j$-th one unchanged, and set the $j$-th coordinate to the sign opposite to that of the multiplier
(9) 
In particular, after step $j$ the first $j$ coordinates are binary, while the remaining coordinates are real valued in general. After $n$ steps the vector is fully binary.
Let us comment on the meaning and motivation behind the steps of the AMP algorithm above, as well as the motivation behind the projection described above and used in [Mon18]. The idea is that when the AMP algorithm succeeds, the truncated vector, while not an element of the binary cube, should be nearly optimal in the sense that
and should not be too far from the binary cube, so that projecting to it does not change the objective value significantly. Next one observes that the projection effectively rounds the vector to a point of the binary cube in such a way that the objective value only decreases asymptotically. This is verified by observing that for each coordinate $j$, the dependence of the objective on that coordinate is linear, except for terms with repeating coordinates (i.e., index tuples in which some index repeats). Since the coordinate lies in $[-1,1]$, the linearity allows one to round it to $+1$ or $-1$ while only decreasing the objective value. This is done trivially by setting the coordinate to the sign opposite of that of its multiplier, which is (9). This is done iteratively over all coordinates. The terms corresponding to repeating coordinates are easily shown to be of lower order of magnitude than the objective value. As a result one obtains a binary vector satisfying
But since this vector belongs to the solution space itself (the binary cube), it must be the case that in fact
and thus the success of the AMP is validated. Importantly, the near optimality of the binary vector is argued from the near optimality of the fractional vector itself. This discussion is of key importance to the main result of our work, which is stated in the next section.
3 The OGP conjecture and the main result
Consider an arbitrary set of tensors of order $p$.
Definition 3.1.
The set satisfies the Overlap Gap Property (OGP) with domain $D$ and parameters $0<\nu_1<\nu_2<1$ and $\epsilon>0$ if, for every pair of tensors in the set and every pair of solutions $\sigma_1,\sigma_2\in D$, each achieving objective value within $\epsilon$ of the optimum for its respective tensor,
it holds that
(10) 
Namely, every pair of nearly optimal solutions with respect to any two members of the set cannot have normalized inner product in the interval $(\nu_1,\nu_2)$.
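The quantity constrained in (10) is just the normalized absolute inner product. A minimal sketch of checking whether a pair of solutions lands in the forbidden interval (function names are ours, for illustration):

```python
import math

def normalized_overlap(s1, s2):
    """|<s1, s2>| / (||s1|| ||s2||): the normalized absolute inner
    product constrained by (10)."""
    dot = sum(a * b for a, b in zip(s1, s2))
    norm1 = math.sqrt(sum(a * a for a in s1))
    norm2 = math.sqrt(sum(b * b for b in s2))
    return abs(dot) / (norm1 * norm2)

def in_forbidden_interval(s1, s2, nu1, nu2):
    """True if the pair's overlap lands strictly inside (nu1, nu2),
    i.e. the configuration the OGP rules out for near optimal pairs."""
    return nu1 < normalized_overlap(s1, s2) < nu2
```

For binary vectors of the same length $n$ both norms equal $\sqrt{n}$, so the overlap reduces to $|\langle \sigma_1,\sigma_2\rangle|/n$, as used in the proof of Theorem 3.4 below.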
Consider two independent random tensors $J^0$ and $J^1$, both with i.i.d. Gaussian entries. Introduce the interpolated family of tensors $J(\tau)$ with $\tau$ varying in $[0,1]$, chosen so that for each fixed $\tau$, $J(\tau)$ is distributed as $J^0$. Our main conjecture regarding the OGP concerns this family.
Conjecture 3.2.
For every even $p\ge 4$ there exist $0<\nu_1<\nu_2<1$ and $\epsilon>0$ such that the family described above satisfies the OGP with domain $[-1,1]^n$ and these parameters, with probability at least $1-e^{-cn}$ for some $c>0$, for all sufficiently large $n$. Furthermore, every pair consisting of a near optimal solution of $J^0$ and a near optimal solution of $J^1$ has normalized overlap at most $\nu_1$, with probability at least $1-e^{-cn}$ for some $c>0$ and all large $n$.
Our main result stated below assumes the validity of this conjecture. To state it, let us introduce the following. Let $\mathcal{M}$ denote the space of probability measures on $[-B,B]$. Let $z^T$ denote the output of the first two steps of Algorithm 2.3 after $T$ steps with coefficient tensor $J$ and initial data $z^0$, where the entries of $z^0$ are i.i.d. with some fixed law. Then the following holds.
Theorem 3.3.
Thus we argue the failure of the AMP to find a vector which is a near optimizer over the Hilbert cube. As discussed earlier, this is a negative result regarding the performance of AMP, since finding such a near optimal vector is a key step towards finding a near optimal member of the binary cube. Ideally, one would establish that the vector obtained from it via any projection scheme, such as the one described above, is also bounded away from optimality. Unfortunately, our proof technique stops short of that, due to the potential sensitivity of the sign function used in obtaining the projection to perturbations of the underlying vector, thus potentially violating the stability used crucially in the proof of our main result. We leave this as an interesting open question.
Partial support for the validity of Conjecture 3.2 above is its validity for the binary domain $\{-1,+1\}^n$, as we now establish.
Theorem 3.4.
For every even $p\ge 4$ there exist $0<\nu_1<\nu_2<1$ and $\epsilon>0$ such that the family described above satisfies the OGP with domain $\{-1,+1\}^n$ and these parameters, with probability at least $1-e^{-cn}$ for some $c>0$, for all sufficiently large $n$. Furthermore, every pair consisting of a near optimal solution of $J^0$ and a near optimal solution of $J^1$ has normalized overlap at most $\nu_1$, with probability at least $1-e^{-cn}$ for some $c>0$ and all large $n$.
Proof.
We note that in the binary case, since $\|\sigma\|=\sqrt{n}$ for each $\sigma\in\{-1,+1\}^n$, the requirement (10) in the definition of the OGP simplifies to
It was established in [CGP19], Theorem 3, that the OGP holds for a single instance of the tensor, i.e. for the family consisting of one tensor, with probability at least $1-e^{-cn}$ for some $c>0$ and all large $n$. At the same time, the following chaos property was established in [CHL18]:
Theorem 3.5 ([CHL18], Theorem 2).
For every $\epsilon>0$ there exists $\delta>0$ such that, with high probability, every pair of $\delta$-optimal solutions of the two independent instances has normalized overlap at most $\epsilon$.
We now combine these two results. We first claim that it suffices to establish the OGP for a discrete finite subset of interpolation parameters. Fix $\delta>0$ such that $1/\delta$ is an integer and consider the parameters $\tau_k=k\delta$ for $k=0,1,\dots,1/\delta$. We assume the OGP holds for this discrete set for every sufficiently small such $\delta$. Now for any $\tau$
In light of the concentration bound of Theorem 2.1, for any $\epsilon>0$ we can find $\delta$ small enough so that
with probability at least $1-e^{-cn}$, for some $c>0$ and large $n$. This means that, modulo an exponentially small probability, every solution which is nearly optimal for $J(\tau)$ is also nearly optimal for the closest discretized instance. Thus the continuous family satisfies the OGP with a slightly larger optimality tolerance and the same $\nu_1,\nu_2$, provided that the discrete set satisfies the OGP. Thus we now prove the OGP for this discrete set.
Let $\nu_1,\nu_2,\epsilon$ be OGP parameters for a single instance. By the union bound over the discretized parameters, the OGP holds for each instance modulo an exponentially small probability. Applying Theorem 3.5, we find parameters so that the theorem claim holds for pairs of distinct instances, and by the union bound this also holds for all such pairs modulo an exponentially small probability. Then the OGP holds for the discrete family by considering separately the case of equal parameters and the case of distinct parameters, where in the latter case every pair of near optimal solutions simply has overlap at most $\nu_1$.
The second part of the theorem follows immediately from the chaos property of Theorem 3.5 in the special case of the independent instances $J^0$ and $J^1$. ∎
Conjecture 3.2 would follow from Theorem 3.4 if we could establish that every nearly optimal solution in the Hilbert cube is actually close to a point in the binary cube. This is quite plausible, as one does not expect nearly optimal solutions to exist "deep" inside the Hilbert cube. Unfortunately, we are not able to show this, and thus state it as an interesting open problem.
Conjecture 3.6.
Suppose the entries of $J$ are generated i.i.d. according to the Gaussian distribution. For every $\epsilon>0$ there exists $\delta>0$ such that, with probability at least $1-e^{-cn}$ for some $c>0$ and large enough $n$, every $\sigma\in[-1,1]^n$ which is $\delta$-optimal for (2) is within (normalized) distance $\epsilon$ of the binary cube.
4 Preliminary technical results
In this section we establish several preliminary results. We begin by recalling the following operator norm bound.
Lemma 4.1.
There exist constants $c,C>0$ such that
for all sufficiently large $n$.
Proof.
The proof of this result is verbatim that from [BAGJ18b, Lemma 4.7]. We include it for completeness.
Let $\mathbb{B}$ denote the unit Euclidean ball. We may then view the objective as a centered Gaussian process on $\mathbb{B}$, with covariance
This process is rotationally invariant. Fix $\epsilon>0$, let $N_\epsilon$ denote an $\epsilon$-net for $\mathbb{B}$ with respect to the Euclidean norm, and let $N_\epsilon^p$ denote its $p$-fold Cartesian product. By multilinearity of the tensor product,
If we choose $\epsilon$ sufficiently small (depending on $p$), we have
To bound the right hand side, note that for any fixed point of the net the value of the process is a centered Gaussian with bounded variance, so that
where in the second line the point is any point in the net, $|N_\epsilon|$ denotes its cardinality, and the final inequality comes from a Gaussian tail bound, since the value at a fixed point is a centered Gaussian with bounded variance. Note furthermore that we may choose this net so that $|N_\epsilon|\le (3/\epsilon)^n$ [Ver18, Lemma 5.1]. Thus by rotation invariance and a union bound, we see that
Choosing the constant sufficiently large yields the result. ∎
Proposition 4.2.
There exist constants, depending on $p$ and $B$, such that
for all sufficiently large .
Proof.
By multilinearity of the tensor product, the triangle inequality, and the definition of the operator norm,
Since each iterate lies in $[-B,B]^n$, its norm is at most $B\sqrt{n}$. Applying the bound from Lemma 4.1, we obtain the result. ∎
Lemma 4.3.
There exist constants such that, with probability at least $1-e^{-cn}$ for all large $n$, the following holds: every solution satisfying the stated near optimality condition also satisfies the corresponding overlap bound.
5 Continuous dependence
When we view Algorithm 2.3 as a discrete time dynamical system, it is natural to expect that it admits a similar dependence on the tensor as a time-inhomogeneous differential equation of the same form does on its coefficients. Thus our proof of continuous dependence of the iterations on the tensor can be viewed as a discrete analogue of a similar standard result for differential equations; see, e.g., [Tes12, Section 2.4].
Given any tensor $J$, let
(11) 
We now state the main result of this section.