arXiv:2102.09315v1 [math.CO] 18 Feb 2021

Distinguishing power-law uniform random graphs from inhomogeneous random graphs through small subgraphs

Clara Stegehuis

Department of Electrical Engineering, Mathematics and Computer Science,

University of Twente

February 19, 2021

Abstract

We investigate the asymptotic number of induced subgraphs in power-law uniform random graphs. We show that these induced subgraphs typically appear on vertices with specific degrees, which are found by solving an optimization problem. Furthermore, we show that this optimization problem allows us to design a linear-time, randomized algorithm that distinguishes uniform random graphs from random graph models that create graphs with approximately a desired degree sequence: the power-law rank-1 inhomogeneous random graph. This algorithm uses the fact that some specific induced subgraphs appear significantly more often in uniform random graphs than in rank-1 inhomogeneous random graphs.

1 Introduction

Many networks were found to have a degree distribution that is well approximated by a power-law distribution with exponent τ ∈ (2, 3). These power-law real-world networks are often modeled by random graphs: randomized mathematical models that create networks. One of the most natural random graph models to consider is the uniform random graph [14, 16]. Given a degree sequence, the uniform random graph samples a graph uniformly at random from all possible graphs with exactly that degree sequence.

The most common way to analyze uniform random graphs is to instead analyze the configuration model, another random graph model that is easier to analyze [2]. The configuration model creates random multigraphs with a specified degree sequence, i.e., graphs where multiple edges and self-loops can be present. When conditioning on the event that the configuration model results in a simple graph, it is distributed as a uniform random graph. If the probability of the event that the configuration model results in a simple graph is sufficiently large, it is possible to translate results from the configuration model to the uniform random graph. In the case of power-law degrees with exponent τ ∈ (2, 3) however, the probability of the configuration model resulting in a simple graph vanishes, so that the configuration model cannot be used as a method to analyze power-law uniform random graphs [11]. In this setting, uniform random graphs need to be analyzed directly instead. However, analyzing uniform random graphs is in general complex: the presences of different edges are dependent, and there is no simple algorithm for constructing a uniform random graph with power-law degrees.
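The half-edge pairing that defines the configuration model is simple to sketch. The snippet below is an illustrative implementation, not code from the paper; it makes concrete both the construction and the possibility of self-loops and multiple edges.

```python
import random

def configuration_model(degrees, seed=0):
    """Pair half-edges (stubs) uniformly at random; the result is a
    multigraph that may contain self-loops and multiple edges."""
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    # consecutive stubs in the shuffled list form the edges
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

def is_simple(edges):
    """True if the multigraph has no self-loops and no repeated edges."""
    seen = set()
    for u, v in edges:
        e = (min(u, v), max(u, v))
        if u == v or e in seen:
            return False
        seen.add(e)
    return True
```

Conditioned on `is_simple` returning True, the output is distributed as a uniform random graph with that degree sequence; for τ ∈ (2, 3) this event becomes too rare to be useful, which is exactly the obstruction discussed above.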

Several other random graph models that are easy to generate create graphs with approximately the desired degree sequence. The most prominent such models are rank-1 inhomogeneous random graphs [3, 1, 4]. In these models, every vertex is equipped with a weight, and pairs of vertices are connected independently with a probability that is a function of the vertex weights. Another such model is the erased configuration model, which erases all multiple edges and self-loops in the configuration model [3]. As these models are easy to generate and easy to analyze, they are often analyzed as a proxy for random graphs with a desired degree sequence.

In this paper, we investigate induced subgraphs in uniform random graphs. Several special cases of subgraph counts in uniform random graphs have been analyzed before, such as cycles [16, 8]. However, existing results often need a bound on the maximal degree in the graph or the assumption that all degrees are equal, which does not allow for analyzing power-law random graphs with τ ∈ (2, 3). Recently, triangles in uniform power-law random graphs have also been analyzed [5]. In this paper, we investigate the subgraph count of all possible induced subgraphs by using a recent method based on optimization models [10, 7], which made it possible to analyze subgraph counts in erased configuration models and preferential attachment models. We combine this method with novel estimates on the connection probabilities in uniform random graphs [6] to obtain an optimization model that finds the most likely composition of an induced subgraph of a power-law uniform random graph. This method allows us to localize and enumerate all possible induced subgraphs.

We then use this optimization problem to design a randomized algorithm that distinguishes two types of rank-1 inhomogeneous random graphs from uniform random graphs in linear time. Interestingly, this shows that approximate-degree power-law random graphs are fundamentally different in structure from power-law uniform random graphs. Indeed, there are subgraphs that appear significantly more often in uniform random graphs than in these rank-1 inhomogeneous random graphs. Furthermore, the optimization problem that we use to prove results on the number of subgraphs allows us to detect these differences in linear time, while subgraph counting in general cannot be done in linear time.

We first introduce the uniform random graph and the induced subgraph counts in Section 1. Then, we present our main results on subgraph counts in the large network limit in Section 2. After that, we discuss the implications of these results for distinguishing uniform random graphs from inhomogeneous random graphs in Section 2.2. We then provide the proofs of our main results in Sections 3-6.

Notation. We denote [k] = {1, 2, . . . , k}. We say that a sequence of events (E_n)_{n≥1} happens with high probability (w.h.p.) if lim_{n→∞} P(E_n) = 1, and we write →_P for convergence in probability. We write f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0, and f(n) = O(g(n)) if |f(n)|/g(n) is uniformly bounded. We write f(n) = Θ(g(n)) if f(n) = O(g(n)) as well as g(n) = O(f(n)). We say that X_n = O_P(g(n)) for a sequence of random variables (X_n)_{n≥1} if |X_n|/g(n) is a tight sequence of random variables, and X_n = o_P(g(n)) if X_n/g(n) →_P 0.

Uniform random graphs. Given a positive integer n and a graphical degree sequence, i.e., a sequence of n positive integers d = (d_1, d_2, . . . , d_n), the uniform random graph URG^{(n)}(d) is a simple graph, uniformly sampled from the set of all simple graphs with degree sequence (d_i)_{i∈[n]}. Let d_max = max_{i∈[n]} d_i and L_n = Σ_{i=1}^n d_i. We denote the empirical degree distribution by

    F_n(j) = (1/n) Σ_{i∈[n]} 1{d_i ≤ j}.    (1.1)

We study the setting where the variance of d diverges when n grows large. In particular, we assume that the degree sequence satisfies the following assumption:

Assumption 1.1 (Degree sequence).

(i) There exist τ ∈ (2, 3) and constants K_1, K_2 > 0 such that for every n ≥ 1 and every 0 ≤ j ≤ d_max,

    K_1 j^{1-τ} ≤ 1 - F_n(j) ≤ K_2 j^{1-τ}.    (1.2)

(ii) There exist τ ∈ (2, 3) and a constant C > 0 such that, for all j = O(√n),

    1 - F_n(j) = C j^{1-τ}(1 + o(1)).    (1.3)

It follows from (1.2) that

    d_max < M n^{1/(τ-1)}, for some sufficiently large constant M > 0.    (1.4)

Furthermore, Assumptions (i) and (ii) together show that

    lim_{n→∞} (1/n) Σ_{i=1}^n d_i = lim_{n→∞} L_n/n = µ < ∞,    (1.5)

for some µ > 0.
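For intuition, a deterministic sequence with d_i proportional to (n/i)^{1/(τ-1)} satisfies tail bounds of the form (1.2). The sketch below is an illustration (not necessarily the sequence used in the paper) that checks the empirical tail numerically.

```python
def power_law_degrees(n, tau):
    """Degree sequence with d_i ~ (n/i)^{1/(tau-1)}, so that the
    empirical tail 1 - F_n(j) decays roughly as j^{1-tau}."""
    return [max(1, int((n / i) ** (1.0 / (tau - 1)))) for i in range(1, n + 1)]

def empirical_tail(degrees, j):
    """1 - F_n(j): fraction of vertices with degree exceeding j."""
    return sum(1 for d in degrees if d > j) / len(degrees)

n, tau = 10000, 2.5
d = power_law_degrees(n, tau)
# d_max is of order n^{1/(tau-1)}, in line with (1.4)
ratio = empirical_tail(d, 10) / 10 ** (1 - tau)
```

The ratio of the empirical tail to j^{1-τ} stays bounded between constants, as required by (1.2).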

2 Main results

We now present our main results. Let H = (V_H, E_H) be a small, connected graph. We are interested in the induced subgraph count of H: the number of subgraphs of URG^{(n)}(d) that are isomorphic to H. Let URG^{(n)}(d)|_v denote the induced subgraph obtained by restricting URG^{(n)}(d) to the vertices v. We can write the probability that an induced subgraph H with |V_H| = k is created on k uniformly chosen vertices v = (v_1, . . . , v_k) in URG^{(n)}(d) as

    P(URG^{(n)}(d)|_v = H) = Σ_{d′} P(URG^{(n)}(d)|_v = H | d_v = d′) P(d_v = d′),    (2.1)

where the sum is over all possible degrees on k vertices d′ = (d′_i)_{i∈[k]}, and d_v = (d_{v_i})_{i∈[k]} denotes the degrees of the randomly chosen set of k vertices. Recently, it has been shown that in erased configuration models, there is a specific range of d′_1, . . . , d′_k that gives the maximal contribution to the number of subgraphs on those degrees, sufficiently large to ignore all other degree ranges [10]. In this paper, we show that (2.1) is also optimized for specific ranges of d′_1, . . . , d′_k that depend on the subgraph H.

Furthermore, we show that when (2.1) is maximized by a unique range of degrees, there are only four possible ranges of degrees that maximize the term inside the sum in (2.1): constant degrees, or degrees proportional to n^{(τ-2)/(τ-1)}, to √n, or to n^{1/(τ-1)}. Interestingly, these are the same ranges that contribute in the erased configuration model [10]. However, the optimal distribution of the subgraph vertices over these ranges may be different in the erased configuration model and the uniform random graph.
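For concreteness, these four degree ranges can be evaluated numerically; the values τ = 2.5 and n = 10^6 below are illustrative choices, not taken from the paper.

```python
n, tau = 10**6, 2.5

# the four possible optimal orders of magnitude of the degrees
scales = {
    "constant": 1.0,
    "n^{(tau-2)/(tau-1)}": n ** ((tau - 2) / (tau - 1)),
    "sqrt(n)": n ** 0.5,
    "n^{1/(tau-1)}": n ** (1 / (tau - 1)),
}
# for tau = 2.5 these are 1, n^{1/3} = 100, n^{1/2} = 1000, n^{2/3} = 10000
```

The four scales are well separated for every τ ∈ (2, 3), which is what makes the degree classes of the later sections distinguishable.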


2.1 Optimizing the subgraph degrees

We now present the optimization problem that maximizes the summand in (2.1) for induced subgraphs. Let H = (V_H, E_H) be a small, connected graph on k ≥ 3 vertices. Denote the set of vertices of H that have degree one inside H by V_1. Let P be the set of all partitions of V_H \ V_1 into three disjoint sets S_1, S_2, S_3. This partition into S_1, S_2 and S_3 corresponds to the optimal orders of magnitude of the degrees in (2.1): S_1 is the set of vertices with degree proportional to n^{(τ-2)/(τ-1)}, S_2 the set with degrees proportional to n^{1/(τ-1)}, and S_3 the set of vertices with degrees proportional to √n. We then derive an optimization problem that finds the partition of the vertices into these three orders of magnitude that maximizes the contribution to the number of induced subgraphs. When a vertex in H has degree 1, its degree in URG^{(n)}(d) is typically small, i.e., it does not grow with n.

Given a partition P = (S_1, S_2, S_3) of V_H \ V_1, let E_{S_i} denote the set of edges in H between vertices in S_i and E_{S_i} = |E_{S_i}| its size, E_{S_i,S_j} the set of edges between vertices in S_i and S_j and E_{S_i,S_j} = |E_{S_i,S_j}| its size, and finally E_{S_i,V_1} the set of edges between vertices in V_1 and S_i and E_{S_i,V_1} = |E_{S_i,V_1}| its size. We now define the optimization problem that optimizes the summand in (2.1) as

    B(H) = max_P [ |S_1| + (1/(τ-1)) |S_2| (2 - τ - k + |S_1| + k_1)
                   - (2E_{S_1} - 2E_{S_2} + E_{S_1,S_3} - E_{S_2,S_3} + E_{S_1,V_1} - E_{S_2,V_1}) / (τ-1) ].    (2.2)

Let S*_1, S*_2, S*_3 be a maximizer of (2.2). Furthermore, for any (α_1, . . . , α_k) such that α_i ∈ [0, 1/(τ-1)], define

    M_n^{(α)}(ε) = {(v_1, . . . , v_k) : d_{v_i} ∈ [ε, 1/ε](µn)^{α_i} ∀i ∈ [k]}.    (2.3)

These are the sets of vertices (v_1, . . . , v_k) such that d_{v_1} is proportional to n^{α_1}, d_{v_2} is proportional to n^{α_2}, and so on. Denote the number of subgraphs with vertices in M_n^{(α)}(ε) by N(H, M_n^{(α)}(ε)). Define the vector α as

    α_i = (τ-2)/(τ-1)  if i ∈ S*_1,
          1/(τ-1)      if i ∈ S*_2,
          1/2          if i ∈ S*_3,
          0            if i ∈ V_1.    (2.4)
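Since P ranges over finitely many partitions, (2.2) can be solved by brute force for small H. The sketch below is an illustrative transcription of (2.2), not code from the paper; the default τ = 2.5 and the example graphs are assumptions for the demonstration.

```python
from itertools import product

def B_of_H(k, H_edges, tau=2.5):
    """Brute-force maximization of (2.2) over all partitions of
    V_H minus V_1 into S1, S2, S3; returns (B(H), best partition)."""
    deg = [0] * k
    for u, v in H_edges:
        deg[u] += 1
        deg[v] += 1
    V1 = {i for i in range(k) if deg[i] == 1}
    rest = [i for i in range(k) if deg[i] >= 2]
    k1 = len(V1)

    def E(A, B):
        # number of edges of H with one endpoint in A and one in B
        if A == B:
            return sum(1 for u, v in H_edges if u in A and v in A)
        return sum(1 for u, v in H_edges
                   if (u in A and v in B) or (u in B and v in A))

    best, best_P = float("-inf"), None
    for labels in product((1, 2, 3), repeat=len(rest)):
        S = {1: set(), 2: set(), 3: set()}
        for vert, lab in zip(rest, labels):
            S[lab].add(vert)
        val = (len(S[1])
               + len(S[2]) * (2 - tau - k + len(S[1]) + k1) / (tau - 1)
               - (2 * E(S[1], S[1]) - 2 * E(S[2], S[2])
                  + E(S[1], S[3]) - E(S[2], S[3])
                  + E(S[1], V1) - E(S[2], V1)) / (tau - 1))
        if val > best:
            best, best_P = val, (S[1], S[2], S[3])
    return best, best_P
```

For a triangle, the maximum is 0, attained with all vertices in S_3, matching the condition of Theorem 2.2 below.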

The next theorem shows that sets of vertices in M_n^{(α)}(ε) contain a large number of subgraphs, and computes the scaling of the number of induced subgraphs:

Theorem 2.1 (General induced subgraphs). Let H be a subgraph on k vertices such that the solution to (2.2) is unique.

(i) For any ε_n such that lim_{n→∞} ε_n = 0,

    N(H, M_n^{(α)}(ε_n)) / N(H) →_P 1.    (2.5)

(ii) Furthermore, for any fixed 0 < ε < 1,

    N(H, M_n^{(α)}(ε)) / n^{((3-τ)/2)(k_{2+}+B(H)) + k_1/2} ≤ f(ε) + o_P(1),    (2.6)

and

    N(H, M_n^{(α)}(ε)) / n^{((3-τ)/2)(k_{2+}+B(H)) + k_1/2} ≥ f̃(ε) + o_P(1),    (2.7)

for some functions f(ε), f̃(ε) < ∞ not depending on n. Here k_{2+} denotes the number of vertices in H of degree at least 2, and k_1 the number of degree-one vertices in H.

Thus, Theorem 2.1(i) shows that asymptotically, all induced subgraphs H have vertices in M_n^{(α)}(ε), and Theorem 2.1(ii) then computes the scaling in n of the number of such induced subgraphs.

Now we study the special class of induced subgraphs for which the unique maximum of (2.2) is S*_3 = V_H. By the above interpretation of S*_1, S*_2 and S*_3, these are induced subgraphs where the maximum contribution to the number of such subgraphs is from vertices with degrees proportional to √n in URG^{(n)}(d). For such induced subgraphs, we can obtain the detailed asymptotic scaling including the leading constant:

Theorem 2.2 (Induced subgraphs with √n degrees). Let H be a connected graph on k vertices with minimal degree 2 such that the solution to (2.2) is unique, and B(H) = 0. Then,

    N(H) / n^{(k/2)(3-τ)} →_P A(H) < ∞,    (2.8)

with

    A(H) = (C(τ-1)/µ^{(τ-1)/2})^k ∫_0^∞ ··· ∫_0^∞ (x_1 ··· x_k)^{-τ} ∏_{{i,j}∈E_H} x_i x_j/(1 + x_i x_j) ∏_{{u,v}∉E_H} 1/(1 + x_u x_v) dx_1 ··· dx_k.    (2.9)

Optimal induced subgraph structures. Interestingly, Theorem 2.1 implies that the number of copies of a specific induced subgraph H is dominated by the copies in which its vertices embedded in URG^{(n)}(d) have specific degrees, determined by maximizing (2.2). First restricting to these degrees, and then analyzing the subgraph count, allows us to obtain the scaling of the number of induced subgraphs in power-law uniform random graphs, where the analysis method via the configuration model breaks down. Furthermore, this does not only give us information on the total number of subgraphs, but also on where in the graph we are most likely to find them (i.e., on which degrees).

Automorphisms of H. An automorphism of a graph H is a map V_H → V_H such that the resulting graph is isomorphic to H. In Theorem 2.2 we count automorphisms of H as separate copies, so that we may count multiple copies of H on one set of vertices and edges. Therefore, to count the number of induced subgraphs without automorphisms, one should divide the results of Theorem 2.2 by the number of automorphisms of H.
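For small H, the number of automorphisms to divide by can be computed by brute force over all vertex permutations (an illustrative sketch, not code from the paper):

```python
from itertools import permutations

def count_automorphisms(k, edges):
    """Number of adjacency-preserving bijections of the vertex set [k]."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for p in permutations(range(k)):
        if {frozenset((p[u], p[v])) for u, v in edges} == edge_set:
            count += 1
    return count

# a triangle has 3! = 6 automorphisms; a path on 3 vertices has 2
print(count_automorphisms(3, [(0, 1), (1, 2), (0, 2)]))  # → 6
print(count_automorphisms(3, [(0, 1), (1, 2)]))          # → 2
```

The O(k!) loop is only viable for the small motifs considered here, which is exactly the regime of the paper.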

2.2 Distinguishing uniform random graphs from rank-1 inhomogeneous random graphs

Uniform random graphs create random networks that are uniformly sampled from all graphs with precisely a desired degree sequence. However, in the power-law degree range with τ ∈ (2, 3), it is difficult to generate such graphs, as the method that regenerates a configuration model until a simple graph is obtained no longer works [11]. Therefore, random graph models that generate networks with approximately a desired degree sequence are often used as a proxy for uniform random graphs instead, as many of these models are easy to generate. One such model is the rank-1 inhomogeneous random graph [4, 1]. In the inhomogeneous random graph, every vertex i is equipped with a weight w_i. We here assume that these weights are sampled from a power-law distribution with τ ∈ (2, 3). Then, several choices of the connection probability p(w_i, w_j) are possible. Common choices are [4, 1]

    p(w_i, w_j) = min(w_i w_j/(µn), 1),    (2.10)
    p(w_i, w_j) = 1 - e^{-w_i w_j/(µn)},    (2.11)
    p(w_i, w_j) = w_i w_j/(w_i w_j + µn),    (2.12)

where µ denotes the average weight. By choosing the connection probabilities in this manner, the degree of vertex i is approximately w_i with high probability [9].
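All three kernels are one-liners, and generating the graph takes a few more lines. The sketch below is illustrative (with an O(n^2) loop for clarity); the kernel names are common labels in the literature, not the paper's terminology.

```python
import math
import random

def connection_probability(wi, wj, mu_n, kernel="ratio"):
    """The three standard kernels (2.10)-(2.12); mu_n is the total weight."""
    x = wi * wj / mu_n
    if kernel == "chung-lu":        # (2.10)
        return min(x, 1.0)
    if kernel == "norros-reittu":   # (2.11)
        return 1.0 - math.exp(-x)
    return x / (1.0 + x)            # (2.12)

def rank1_graph(weights, kernel="ratio", seed=0):
    """Connect each pair i < j independently with probability p(w_i, w_j)."""
    rng = random.Random(seed)
    n, mu_n = len(weights), sum(weights)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < connection_probability(
                weights[i], weights[j], mu_n, kernel)]
```

For every weight pair, min(x, 1) ≥ 1 - e^{-x} ≥ x/(1 + x), so (2.10) gives the densest and (2.12) the sparsest of the three models.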

Theorems 2.1 and 2.2 indicate that in terms of induced subgraphs, the uniform random graph produces the same results as the rank-1 inhomogeneous random graph with connection probabilities as in (2.12). Intuitively, this can be seen from the constant A(H) in Theorem 2.2, where this connection probability appears in a scaled form. Indeed, our proofs are based on the fact that in the uniform random graph, the probability that two vertices form a connection can be approximated by (2.12) (see Lemma 3.1). Therefore, Theorems 2.1 and 2.2 also hold for rank-1 inhomogeneous random graphs with connection probability (2.12).

In [10, 15], similar theorems for random graphs with connection probabilities (2.11) and (2.10) were derived. The number of all induced subgraphs in the model with connection probability (2.10) has the same scaling in n as the number of induced subgraphs in the model with connection probability (2.11). However, the scaling in n of the number of copies of some induced subgraphs in models generated from (2.11) and (2.10) may be different from the scaling in the uniform random graph. The smallest such subgraphs are of size 6, and are plotted in Figure 1. Figure 1 shows that these two subgraphs appear significantly more often in the uniform random graph than in the inhomogeneous random graphs.

Interestingly, this means that rank-1 inhomogeneous random graphs and uniform random graphs can be distinguished by studying small subgraph patterns of size 6. Previous results showed that random graphs with connection probability (2.12) are distinguishable from those generated by connection probabilities (2.11) and (2.10) by their maximum clique size, which differs by a factor of log(n) [12]. However, finding the largest clique is an NP-hard problem [13], while this method only needs subgraphs of size 6 as an input, which can be detected in polynomial time. Furthermore, the difference between the numbers of copies of the induced subgraphs of Figure 1 is not a logarithmic factor, but a polynomial factor, making it easier to detect such differences.

Specifically, we can show that in only O(n) time, it is possible to distinguish between power-law uniform random graphs and the approximate-degree random graph models of (2.10) and (2.11) with high probability:

Theorem 2.3. There exists a randomized algorithm that distinguishes power-law uniform random graphs from power-law rank-1 inhomogeneous random graphs with connection probabilities (2.10) or (2.11) in time O(n), with accuracy at least 1 - n^γ e^{-cn^β} for some constants c, β, γ > 0.

Figure 1: Two induced subgraphs on 6 vertices with their scaling in n and optimal degrees, in the uniform random graph from Theorem 2.1 (panels (a) and (c), with scalings n^{3(3-τ)+1/(τ-1)} and n^{(4-1/(τ-1))(3-τ)}), and in inhomogeneous random graphs with connection probabilities (2.10) and (2.11) obtained from [10] (panels (b) and (d), both with scaling n^{3(3-τ)}). Vertex shading indicates the optimal degrees 1, n^{(τ-2)/(τ-1)}, √n and n^{1/(τ-1)}.

We will prove this theorem in Section 6, where we also introduce the randomized algorithm that distinguishes between these two random graph models. This algorithm is based on the subgraphs displayed in Figure 1(c) and (d). It first selects vertices that have degrees close to n^{1/(τ-1)} and n^{(τ-2)/(τ-1)}, and then randomly searches among those vertices for the induced subgraph of Figure 1(c). In a uniform random graph, this search succeeds with high probability, whereas in the rank-1 inhomogeneous random graphs with connection probabilities (2.10) and (2.11) it fails with high probability.
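The two-step structure just described can be sketched as follows. This is an illustrative guess at the shape of the algorithm, not the paper's exact Algorithm from Section 6: the band width, the number of probes, and the caller-supplied pattern test `is_target_pattern` (standing in for the specific 6-vertex pattern of Figure 1(c)) are all assumptions.

```python
import random

def distinguish(adj, degrees, n, tau, is_target_pattern,
                probes=1000, band=4.0, seed=0):
    """Sketch of the Theorem 2.3 test: sample 6-tuples from the two
    relevant degree classes and look for the target induced pattern.
    Returns True if the graph looks like a uniform random graph."""
    rng = random.Random(seed)
    hub = n ** (1 / (tau - 1))          # degrees ~ n^{1/(tau-1)}
    mid = n ** ((tau - 2) / (tau - 1))  # degrees ~ n^{(tau-2)/(tau-1)}
    A = [v for v, d in enumerate(degrees) if hub / band <= d <= hub * band]
    B = [v for v, d in enumerate(degrees) if mid / band <= d <= mid * band]
    if len(A) < 2 or len(B) < 4:
        return False
    for _ in range(probes):             # O(1) probes of O(1) size each
        vs = rng.sample(A, 2) + rng.sample(B, 4)
        induced = {(u, v) for u in vs for v in vs
                   if u < v and v in adj.get(u, set())}
        if is_target_pattern(vs, induced):
            return True                 # pattern found: uniform-like
    return False
```

Selecting the two degree classes takes one pass over the degrees, and each probe inspects a constant number of vertex pairs, so the total work is O(n), as Theorem 2.3 requires.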

Organization of the proofs. We will prove Theorems 2.1-2.3 in the following sections. First, Section 3 proves Theorem 2.1(ii), by calculating the probability that H appears on a specified subset of vertices, and optimizing that probability. Then, Section 4 proves Theorem 2.2 with a second moment method. Section 5 proves Theorem 2.1(i), and Section 6 introduces and analyzes the randomized algorithm that proves Theorem 2.3.

3 Proof of Theorem 2.1(ii)

3.1 Subgraph probability in the uniform random graph

We first investigate the probability that a given small graph H appears as an induced subgraph of URG^{(n)}(d) on a specific set of vertices v. We denote the degree of a vertex i inside the induced subgraph H by d_i^{(H)}.

Lemma 3.1. Let H be a connected graph on k vertices, and let d be a degree sequence satisfying Assumption 1.1. Furthermore, assume that d_{v_i} ≫ 1 or d_i^{(H)} = 1 for all i ∈ [k]. Then,

    P_n(URG^{(n)}(d)|_v = E_H) = ∏_{{i,j}∈E_H} d_{v_i} d_{v_j}/(L_n + d_{v_i} d_{v_j}) ∏_{{s,t}∉E_H} L_n/(L_n + d_{v_s} d_{v_t}) (1 + o(1)).    (3.1)

Proof. Suppose that G^+ is a subset of the edges of H, and G^- a subset of the non-edges of H. Let 𝒢^- denote the event that the non-edges of G^- are not present in URG^{(n)}(d)|_v, and let 𝒢^+ denote the event that the edges of G^+ are present in URG^{(n)}(d)|_v. Let d_{(1)} ≥ d_{(2)} ≥ ··· ≥ d_{(n)} denote the ordered version of d. Then, by Assumption (i),

    d_{(k)} ≤ W (n/k)^{1/(τ-1)},    (3.2)

for some constant W > 0. Therefore,

    Σ_{i=1}^{Cn^{1/(τ-1)}} d_{(i)} ≤ Σ_{i=1}^{Cn^{1/(τ-1)}} W (n/i)^{1/(τ-1)} ≤ W n^{1/(τ-1)} ∫_0^{Cn^{1/(τ-1)}} x^{1/(1-τ)} dx = C̃ n^{1/(τ-1)} n^{(τ-2)/(τ-1)^2},    (3.3)

for some C̃ > 0. Thus, Σ_{i=1}^{Cn^{1/(τ-1)}} d_{(i)} = o(n) for τ ∈ (2, 3), while L_n = Θ(n) by (1.5). Therefore, we may apply [6, Corollary 2] to obtain

    P({i, j} ∈ URG^{(n)}(d) | 𝒢^-, 𝒢^+) = (d_i - d_i^{(G)})(d_j - d_j^{(G)}) / (L_n + (d_i - d_i^{(G)})(d_j - d_j^{(G)})) (1 + o(1)),    (3.4)

where d_i^{(G)} denotes the degree of vertex i within G^+. When d_i ≫ 1, then d_i - d_i^{(G)} = d_i(1 + o(1)), as d_i^{(G)} ≤ k - 1. Thus, when d_i ≫ 1 or d_i^{(G)} = 0, and d_j ≫ 1 or d_j^{(G)} = 0, then (3.4) becomes

    P({i, j} ∈ URG^{(n)}(d) | 𝒢^-, 𝒢^+) = d_i d_j / (L_n + d_i d_j) (1 + o(1)).    (3.5)

Therefore also

    P({i, j} ∉ URG^{(n)}(d) | 𝒢^-, 𝒢^+) = L_n / (L_n + d_i d_j) (1 + o(1)).    (3.6)

We now use (3.5) and (3.6) to compute the probability that H appears as an induced subgraph on vertices v. Let the m edges of H be denoted by e_1 = {i_1, j_1}, . . . , e_m = {i_m, j_m}, and the k(k-1)/2 - m non-edges of H by ē_1 = {w_1, z_1}, . . . , ē_{k(k-1)/2-m} = {w_{k(k-1)/2-m}, z_{k(k-1)/2-m}}. Furthermore, define G_0^+ = ∅ and G_s^+ = G_{s-1}^+ ∪ {v_{i_s}, v_{j_s}}. Similarly, define G_0^- = ∅ and G_s^- = G_{s-1}^- ∪ {v_{w_s}, v_{z_s}}. Then,

    P_n(URG^{(n)}(d)|_v = E_H) = ∏_{s=1}^m P({v_{i_s}, v_{j_s}} ∈ URG^{(n)}(d) | 𝒢_{s-1}^+) × ∏_{t=1}^{k(k-1)/2-m} P({v_{w_t}, v_{z_t}} ∉ URG^{(n)}(d) | 𝒢_m^+, 𝒢_{t-1}^-).    (3.7)

We then use (3.5) and (3.6). This is allowed because d_{v_i} ≫ 1 or d_i^{(H)} = 1 for all i ∈ [k], so that when the edge e_l incident to vertex i is added in (3.7), then d_i^{(G_{l-1}^+)} = 0 or d_{v_i} ≫ 1: when d_i^{(H)} = 1, then i has no other incident edges in H, and therefore degree zero in G_{l-1}^+. Thus, we obtain

    P_n(URG^{(n)}(d)|_v = E_H) = ∏_{{i,j}∈E_H} d_{v_i} d_{v_j}/(L_n + d_{v_i} d_{v_j}) ∏_{{s,t}∉E_H} L_n/(L_n + d_{v_s} d_{v_t}) (1 + o(1)).    (3.8)

3.2 Optimizing the probability of a subgraph

We now study the probability that H is present as an induced subgraph on vertices (v_1, . . . , v_k) of specific degrees. Assume that d_{v_i} ∈ [ε, 1/ε] n^{α_i} with α_i ∈ [0, 1/(τ-1)] for i ∈ [k], so that d_{v_i} = Θ(n^{α_i}).

Let H be an induced subgraph on k vertices labeled as 1, . . . , k, and let X_{v_i,v_j} denote the indicator that the edge {v_i, v_j} is present in URG^{(n)}(d). We now study the probability that URG^{(n)}(d)|_v = E_H. When α_i + α_j < 1, by Lemma 3.1,

    P(X_{v_i,v_j} = 0) = Θ(1 - n^{α_i+α_j}/(n^{α_i+α_j} + µn)) (1 + o(1)) = 1 + o(1),    (3.9)

while

    P(X_{v_i,v_j} = 1) = Θ(n^{α_i+α_j}/(n^{α_i+α_j} + µn)) (1 + o(1)) = Θ(n^{α_i+α_j-1}).    (3.10)

On the other hand, for α_i + α_j > 1,

    P(X_{v_i,v_j} = 0) = Θ(1 - n^{α_i+α_j}/(n^{α_i+α_j} + µn)) (1 + o(1)) = Θ(n^{1-α_i-α_j}),    (3.11)

while

    P(X_{v_i,v_j} = 1) = Θ(n^{α_i+α_j}/(n^{α_i+α_j} + µn)) (1 + o(1)) = 1 + o(1).    (3.12)

Furthermore, when α_i + α_j = 1, P(X_{v_i,v_j} = 0) = Θ(1) and P(X_{v_i,v_j} = 1) = Θ(1). Combining this with Lemma 3.1 shows that we can write the probability that H occurs as an induced subgraph on v = (v_1, . . . , v_k) as

    P(URG^{(n)}(d)|_v = E_H) = Θ( ∏_{{i,j}∈E_H: α_i+α_j<1} n^{α_i+α_j-1} ∏_{{u,v}∉E_H: α_u+α_v>1} n^{1-α_u-α_v} ).    (3.13)

Furthermore, by Assumption 1.1 the number of vertices with degrees in [ε, 1/ε](µn)^α is Θ(n^{(1-τ)α+1}) for α ≤ 1/(τ-1). Then, for M_n^{(α)} as in (2.3),

    #{sets of vertices with degrees in M_n^{(α)}} = Θ(n^{k+(1-τ) Σ_i α_i}).    (3.14)

Thus,

    N(H, M_n^{(α)}(ε)) = Θ_P( n^{k+(1-τ) Σ_i α_i} ∏_{{i,j}∈E_H: α_i+α_j<1} n^{α_i+α_j-1} ∏_{{u,v}∉E_H: α_u+α_v>1} n^{-α_u-α_v+1} ).    (3.15)


Maximizing the exponent yields

    max_α (1-τ) Σ_i α_i + Σ_{{i,j}∈E_H: α_i+α_j<1} (α_i + α_j - 1) - Σ_{{u,v}∉E_H: α_u+α_v>1} (α_u + α_v - 1).    (3.16)

The following lemma shows that this optimization problem attains its maximum for specific values of the exponents α_i:

Lemma 3.2 (Maximum contribution to subgraphs). Let H be a connected graph on k vertices. If the solution to (3.16) is unique, then the optimal solution satisfies α_i ∈ {0, (τ-2)/(τ-1), 1/2, 1/(τ-1)} for all i. If it is not unique, then there exist at least 2 optimal solutions with α_i ∈ {0, (τ-2)/(τ-1), 1/2, 1/(τ-1)} for all i. In any optimal solution, α_i = 0 if and only if vertex i has degree one in H.

The proof of this lemma follows a similar structure as the proof of [10, Lemma 4.2], and we therefore defer it to Appendix A. We now use the optimal structure of this optimization problem to prove Theorem 2.1(ii):

Proof of Theorem 2.1(ii). Let α be the unique optimizer of (3.16). By Lemma 3.2, the maximal value of (3.16) is attained by partitioning V_H \ V_1 into the sets S_1, S_2, S_3 such that vertices in S_1 have α_i = (τ-2)/(τ-1), vertices in S_2 have α_i = 1/(τ-1), vertices in S_3 have α_i = 1/2, and vertices in V_1 have α_i = 0. Then, the edges with α_i + α_j < 1 are the edges inside S_1, the edges between S_1 and S_3, and the edges incident to degree-one vertices. Furthermore, the non-edges with α_i + α_j > 1 are the non-edges inside S_2 (of which there are ½|S_2|(|S_2|-1) - E_{S_2}) and the non-edges between S_2 and S_3 (of which there are |S_2||S_3| - E_{S_2,S_3}). Recall that the number of edges inside S_1 is denoted by E_{S_1}, the number of edges between S_1 and S_3 by E_{S_1,S_3}, and the number of edges between V_1 and S_i by E_{S_i,V_1}. Then we can rewrite (3.16) as

    max_P [ (1-τ)((τ-2)/(τ-1)|S_1| + 1/(τ-1)|S_2| + ½|S_3|) + (τ-3)/(τ-1) E_{S_1} + (τ-3)/(2(τ-1)) E_{S_1,S_3}
            - E_{S_1,V_1}/(τ-1) - (τ-2)/(τ-1) E_{S_2,V_1} - ½ E_{S_3,V_1}
            + (½|S_2|(|S_2|-1) - E_{S_2})(τ-3)/(τ-1) + (|S_2||S_3| - E_{S_2,S_3})(τ-3)/(2(τ-1)) ],    (3.17)

over all partitions P = (S_1, S_2, S_3) of V_H \ V_1. Using that |S_3| = k - |S_1| - |S_2| - k_1 and E_{S_3,V_1} = k_1 - E_{S_1,V_1} - E_{S_2,V_1}, where k_1 = |V_1|, and extracting a factor (3-τ)/2, shows that this is equivalent to

    ((1-τ)/2) k + max_P ((3-τ)/2) [ |S_1| + (1/(τ-1))|S_2|(2 - τ - k + |S_1| + k_1) + ((τ-2)/(3-τ)) k_1
            - (2E_{S_1} - 2E_{S_2} + E_{S_1,S_3} - E_{S_2,S_3})/(τ-1) - (E_{S_1,V_1} - E_{S_2,V_1})/(τ-1) ].    (3.18)

Since k and k_1 are fixed and 3 - τ > 0, we need to maximize

    B(H) = max_P [ |S_1| + (1/(τ-1))|S_2|(2 - τ - k + |S_1| + k_1) - (2E_{S_1} - 2E_{S_2} + E_{S_1,S_3} - E_{S_2,S_3} + E_{S_1,V_1} - E_{S_2,V_1})/(τ-1) ],    (3.19)

which equals (2.2). By (3.15), the maximal value of N(H, M_n^{(α)}(ε)) then scales as

    n^{((3-τ)/2)(k+B(H)) + ((τ-2)/2) k_1} = n^{((3-τ)/2)(k_{2+}+B(H)) + k_1/2},    (3.20)

which proves Theorem 2.1(ii).

4 Proof of Theorem 2.2

In this section, we will prove Lemma 4.1 below, from which we then prove Theorem 2.2. To this end, we define the special case of M_n^{(α)}(ε) of (2.3) where α_i = 1/2 for all i ∈ V_H = [k] as

    W_n^k(ε) = {(v_1, . . . , v_k) : d_{v_s} ∈ [ε, 1/ε]√(µn) ∀s ∈ [k]},    (4.1)

and let W̄_n^k(ε) denote the complement of W_n^k(ε). We denote the number of subgraphs H with all vertices in W_n^k(ε) by N(H, W_n^k(ε)).

Lemma 4.1 (Major contribution to subgraphs). Let H be a connected graph on k ≥ 3 vertices such that (2.2) is uniquely optimized at S_3 = [k], so that B(H) = 0. Then,

(i) the number of subgraphs with vertices in W_n^k(ε) satisfies

    N(H, W_n^k(ε)) / n^{(k/2)(3-τ)} →_P (C(τ-1))^k µ^{-(k/2)(τ-1)} ∫_ε^{1/ε} ··· ∫_ε^{1/ε} (x_1 ··· x_k)^{-τ} ∏_{{i,j}∈E_H} x_i x_j/(1 + x_i x_j) ∏_{{u,v}∉E_H} 1/(1 + x_u x_v) dx_1 ··· dx_k.    (4.2)

(ii) A(H) defined in (2.9) satisfies A(H) < ∞.

We now prove Theorem 2.2 using this lemma.

Proof of Theorem 2.2. We first study the expected number of induced subgraphs with vertices outside W_n^k(ε) and show that their contribution to the total number of copies of H is small. First, we investigate the expected number of copies of H in the case where vertex 1 of the subgraph has degree smaller than ε√(µn). By Lemma 3.1, the probability that H is present on a specified subset of vertices v = (v_1, . . . , v_k) satisfies

    P(URG^{(n)}(d)|_v = E_H) = Θ( ∏_{{i,j}∈E_H} d_{v_i} d_{v_j}/(L_n + d_{v_i} d_{v_j}) ∏_{{u,w}∉E_H} L_n/(L_n + d_{v_u} d_{v_w}) ).    (4.3)

Furthermore, by (1.3), there exists C_0 such that P(D = k) ≤ C_0 k^{-τ} for all k, where D denotes the degree of a uniformly chosen vertex. Let I(H, v) = 1{URG^{(n)}(d)|_v = E_H}, so that N(H) = Σ_v I(H, v). Define

    h_n(x_1, . . . , x_k) = ∏_{{i,j}∈E_H} x_i x_j/(µn + x_i x_j) ∏_{{s,t}∉E_H} µn/(µn + x_s x_t).    (4.4)

We can use similar methods as in [5, Eq. (4.4)] to show that, for some K* > 0,

    Σ_v E[I(H, v) 1{d_{v_1} < ε√(µn)}] = n^k ∫_1^{ε√(µn)} ∫_1^∞ ··· ∫_1^∞ h_n(x_1, x_2, . . . , x_k) dF_n(x_k) ··· dF_n(x_1)
        ≤ n^k K* ∫_1^{ε√(µn)} ∫_1^∞ ··· ∫_1^∞ (x_2 ··· x_k)^{-τ} h_n(x_1, x_2, . . . , x_k) dx_k ··· dx_2 dF(x_1).    (4.5)

For all non-decreasing g that are bounded on [0, ε√(µn)] and once differentiable, with Ḡ(x) denoting a function such that ∫_0^x Ḡ(y) dy = g(x),

    ∫_0^{ε√(µn)} g(x) dF_n(x) = ∫_0^{ε√(µn)} ∫_0^x Ḡ(y) dy dF_n(x)
        = ∫_0^{ε√(µn)} (F_n(ε√(µn)) - F_n(y)) Ḡ(y) dy
        = C( ∫_0^{ε√(µn)} y^{1-τ} Ḡ(y) dy - ∫_0^{ε√(µn)} (ε√(µn))^{1-τ} Ḡ(y) dy )(1 + o(1))
        = C( (τ-1) ∫_0^{ε√(µn)} y^{-τ} g(y) dy + [g(y) y^{1-τ}]_0^{ε√(µn)} - (ε√(µn))^{1-τ} g(ε√(µn)) )(1 + o(1))
        = C(τ-1) ∫_0^{ε√(µn)} y^{-τ} g(y) dy + o((ε√(µn))^{1-τ} g(ε√(µn))),    (4.6)

where we have used Assumption 1.1(ii). Taking

    g(x) = g_n(x) = ∫_1^∞ ··· ∫_1^∞ (x_2 ··· x_k)^{-τ} h_n(x, x_2, . . . , x_k) dx_2 ··· dx_k    (4.7)

yields for (4.5)

    Σ_v E[I(H, v) 1{d_{v_1} < ε√(µn)}] ≤ n^k K* ∫_1^{ε√(µn)} ∫_1^∞ ··· ∫_1^∞ (x_1 ··· x_k)^{-τ} h_n(x_1, x_2, . . . , x_k) dx_1 ··· dx_k
        + o( n^k (ε√(µn))^{1-τ} ∫_1^∞ ··· ∫_1^∞ (x_2 ··· x_k)^{-τ} h_n(ε√(µn), x_2, . . . , x_k) dx_2 ··· dx_k ).    (4.8)

Now we can bound the first term of (4.8) as

    n^k ∫_1^{ε√(µn)} ∫_1^∞ ··· ∫_1^∞ (x_1 ··· x_k)^{-τ} ∏_{{i,j}∈E_H} x_i x_j/(µn + x_i x_j) ∏_{{u,w}∉E_H} µn/(µn + x_u x_w) dx_1 ··· dx_k
        = n^k (µn)^{(k/2)(1-τ)} ∫_0^ε ∫_0^∞ ··· ∫_0^∞ (t_1 ··· t_k)^{-τ} ∏_{{i,j}∈E_H} t_i t_j/(1 + t_i t_j) ∏_{{u,w}∉E_H} 1/(1 + t_u t_w) dt_1 ··· dt_k
        = O(n^{(k/2)(3-τ)}) h_1(ε),    (4.9)

where h_1(ε) is a function of ε. By Lemma 4.1(ii), h_1(ε) → 0 as ε ↓ 0.

For the second term in (4.8), we obtain

    o( n^k (ε√(µn))^{1-τ} ∫_1^∞ ··· ∫_1^∞ (x_2 ··· x_k)^{-τ} h_n(ε√(µn), x_2, . . . , x_k) dx_2 ··· dx_k )
        = o( n^k (µn)^{(k/2)(1-τ)} ε^{1-τ} ∫_0^∞ ··· ∫_0^∞ (t_2 ··· t_k)^{-τ} h(ε, t_2, . . . , t_k) dt_2 ··· dt_k )
        = o(n^{(k/2)(3-τ)}) h_2(ε),    (4.10)

where

    h(t_1, . . . , t_k) = ∏_{{i,j}∈E_H} t_i t_j/(1 + t_i t_j) ∏_{{u,w}∉E_H} 1/(1 + t_u t_w),    (4.11)

and h_2(ε) is a function of ε.

We can bound the situation where another vertex has degree smaller than ε√(µn), or where one of the vertices has degree larger than √(µn)/ε, similarly. This yields

    E[N(H, W̄_n^k(ε))] = O(n^{(k/2)(3-τ)}) h(ε) + o(n^{(k/2)(3-τ)}) h̃(ε),    (4.12)

for some function h(ε) not depending on n such that h(ε) → 0 when ε ↓ 0, and some function h̃(ε) not depending on n. By the Markov inequality,

    N(H, W̄_n^k(ε)) = h(ε) O_P(n^{(k/2)(3-τ)}).    (4.13)

Thus, for any δ > 0,

    lim sup_{ε→0} lim sup_{n→∞} P( N(H, W̄_n^k(ε)) / n^{k(3-τ)/2} > δ ) = 0.    (4.14)

Combining this with Lemma 4.1(i) gives

    N(H) / n^{(k/2)(3-τ)} →_P (C(τ-1))^k µ^{-(k/2)(τ-1)} ∫_0^∞ ··· ∫_0^∞ (x_1 ··· x_k)^{-τ} ∏_{{i,j}∈E_H} x_i x_j/(1 + x_i x_j) ∏_{{u,w}∉E_H} 1/(1 + x_u x_w) dx_1 ··· dx_k.    (4.15)

4.1 Conditional expectation

We will prove Lemma 4.1 using a second moment method. Thus, we first investigate the expected number of copies of induced subgraph H in URG^{(n)}(d), and then bound its variance. Let H be a subgraph on k vertices, labeled as [k], with m edges, denoted by e_1 = {i_1, j_1}, . . . , e_m = {i_m, j_m}.

Lemma 4.2 (Convergence of conditional expectation of √n subgraphs). Let H be a subgraph such that (2.2) has a unique maximizer, and the maximum is attained at 0. Then,

    E[N(H, W_n^k(ε))] / n^{(k/2)(3-τ)} → (C(τ-1))^k µ^{-(k/2)(τ-1)} ∫_ε^{1/ε} ··· ∫_ε^{1/ε} (x_1 ··· x_k)^{-τ} ∏_{{i,j}∈E_H} x_i x_j/(1 + x_i x_j) ∏_{{u,v}∉E_H} 1/(1 + x_u x_v) dx_1 ··· dx_k.    (4.16)

Proof. We denote

    h(d_1, . . . , d_k) = ∏_{{i,j}∈E_H} d_i d_j/(L_n + d_i d_j) ∏_{{u,v}∉E_H} L_n/(L_n + d_u d_v).    (4.17)

As

    E[N(H, W_n^k(ε))] = Σ_{(v_1,...,v_k)∈W_n^k(ε)} P(URG^{(n)}(d)|_v = E_H),    (4.18)

and d_{v_i} ≥ ε√(µn) for i ∈ [k], we get from Lemma 3.1

    E[N(H, W_n^k(ε))] = Σ_{(v_1,...,v_k)∈W_n^k(ε)} ∏_{{i,j}∈E_H} d_{v_i} d_{v_j}/(L_n + d_{v_i} d_{v_j}) ∏_{{s,t}∉E_H} L_n/(L_n + d_{v_s} d_{v_t}) (1 + o(1))
        = (1 + o(1)) Σ_{1≤i_1<i_2<···<i_k≤n} h(d_{i_1}, . . . , d_{i_k}) 1{i_1, i_2, . . . , i_k ∈ W_n^k(ε)}.    (4.19)

We then define the measure

    M^{(n)}([a, b]) = µ^{(τ-1)/2} n^{(τ-3)/2} Σ_{i∈[n]} 1{d_i ∈ [a, b]√(µn)}.    (4.20)

By [5, Eq. (4.19)],

    M^{(n)}([a, b]) → C(τ-1) ∫_a^b t^{-τ} dt =: λ([a, b]).    (4.21)

Then,

    Σ_{1≤i_1<···<i_k≤n} h(d_{i_1}, . . . , d_{i_k}) 1{i_1, . . . , i_k ∈ W_n^k(ε)} / (n^{(k/2)(3-τ)} µ^{-(k/2)(τ-1)})
        = (1/k!) ∫_ε^{1/ε} ··· ∫_ε^{1/ε} h(t_1, . . . , t_k) dM^{(n)}(t_1) ··· dM^{(n)}(t_k).    (4.22)

Because the function h(t_1, . . . , t_k) is a bounded, continuous function on [ε, 1/ε]^k,

    Σ_{1≤i_1<···<i_k≤n} h(d_{i_1}, . . . , d_{i_k}) 1{i_1, . . . , i_k ∈ W_n^k(ε)} / (n^{(k/2)(3-τ)} µ^{-(k/2)(τ-1)})
        → (1/k!) ∫_ε^{1/ε} ··· ∫_ε^{1/ε} h(t_1, . . . , t_k) dλ(t_1) ··· dλ(t_k)
        = ((C(τ-1))^k / k!) ∫_ε^{1/ε} ··· ∫_ε^{1/ε} (x_1 ··· x_k)^{-τ} ∏_{{i,j}∈E_H} x_i x_j/(1 + x_i x_j) ∏_{{u,v}∉E_H} 1/(1 + x_u x_v) dx_1 ··· dx_k.    (4.23)

4.2 Variance of the number of induced subgraphs

We now study the variance of the number of induced subgraphs. The following lemma shows that this variance is small compared to the square of the expectation:

Lemma 4.3 (Conditional variance for subgraphs). Let H be a subgraph such that (2.2) has a unique maximum, attained at 0. Then,

\[
\frac{\operatorname{Var}\bigl(N(H, W_n^k(\varepsilon))\bigr)}{\mathbb{E}\bigl[N(H, W_n^k(\varepsilon))\bigr]^2} \to 0.
\tag{4.24}
\]

Proof. By Lemma 4.2,

\[
\mathbb{E}\bigl[N(H, W_n^k(\varepsilon))\bigr]^2 = \Theta\bigl(n^{(3-\tau)k}\bigr),
\tag{4.25}
\]

so we need to prove that the variance is small compared to $n^{(3-\tau)k}$. Denote $v = (v_1,\ldots,v_k)$ and $u = (u_1,\ldots,u_k)$ and, for ease of notation, write $G = \mathrm{URG}^{(n)}(d)$. We write the variance as

\[
\operatorname{Var}\bigl(N(H, W_n^k(\varepsilon))\bigr)
= \sum_{v\in W_n^k(\varepsilon)}\sum_{u\in W_n^k(\varepsilon)}
\Bigl(\mathbb{P}\bigl(G|_v = E_H,\, G|_u = E_H\bigr) - \mathbb{P}\bigl(G|_v = E_H\bigr)\mathbb{P}\bigl(G|_u = E_H\bigr)\Bigr).
\tag{4.26}
\]

This splits into various cases, depending on the overlap of v and u. When v and u do not overlap,

\[
\sum_{v\in W_n^k(\varepsilon)}\sum_{u\in W_n^k(\varepsilon)}
\Bigl(\mathbb{P}\bigl(G|_v = E_H,\, G|_u = E_H\bigr) - \mathbb{P}\bigl(G|_v = E_H\bigr)\mathbb{P}\bigl(G|_u = E_H\bigr)\Bigr)
= \sum_{v\in W_n^k(\varepsilon)}\sum_{u\in W_n^k(\varepsilon)}
\Bigl(\mathbb{P}\bigl(G|_v = E_H\bigr)\mathbb{P}\bigl(G|_u = E_H\bigr)(1+o(1)) - \mathbb{P}\bigl(G|_v = E_H\bigr)\mathbb{P}\bigl(G|_u = E_H\bigr)\Bigr)
= \mathbb{E}\bigl[N(H, W_n^k(\varepsilon))\bigr]^2 o(1),
\]

by Lemma 3.1. The other contributions come from overlapping v and u. In this situation, we bound the probability that induced subgraph H is present on a specified set of vertices by 1. When v and u overlap on s ≥ 1 vertices, we bound the contribution to (4.26) as

\[
\sum_{\substack{v,u\in W_n^k(\varepsilon):\\ |v\cup u| = 2k-s}}
\mathbb{P}\bigl(G|_v = E_H,\, G|_u = E_H\bigr)
\le \bigl|\{i : d_i \in \sqrt{\mu n}\,[\varepsilon, 1/\varepsilon]\}\bigr|^{2k-s}
= O\Bigl(n^{\frac{(3-\tau)(2k-s)}{2}}\Bigr),
\tag{4.27}
\]

by Assumption (i). This is $o(n^{(3-\tau)k})$ for τ ∈ (2, 3), as required.

Proof of Lemma 4.1. We start by proving part (i). By Lemma 4.3 and Chebyshev's inequality,

\[
N(H, W_n^k(\varepsilon)) = \mathbb{E}\bigl[N(H, W_n^k(\varepsilon))\bigr](1 + o_{\mathbb{P}}(1)).
\tag{4.28}
\]

Combining this with Lemma 4.2 proves Lemma 4.1(i). Lemma 4.1(ii) follows from Lemma 5.1 in the next section, when |S∗_3| > 0.
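For completeness, the Chebyshev step behind (4.28) reads: for every δ > 0,

\[
\mathbb{P}\Bigl(\bigl|N(H, W_n^k(\varepsilon)) - \mathbb{E}\bigl[N(H, W_n^k(\varepsilon))\bigr]\bigr| > \delta\, \mathbb{E}\bigl[N(H, W_n^k(\varepsilon))\bigr]\Bigr)
\le \frac{\operatorname{Var}\bigl(N(H, W_n^k(\varepsilon))\bigr)}{\delta^2\, \mathbb{E}\bigl[N(H, W_n^k(\varepsilon))\bigr]^2} \to 0
\]

by Lemma 4.3, which is exactly the statement that $N(H, W_n^k(\varepsilon)) = \mathbb{E}[N(H, W_n^k(\varepsilon))](1 + o_{\mathbb{P}}(1))$.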


5 Major contribution to general subgraphs: proof of Theorem 2.1(i)

We first introduce some further notation. As before, we denote the degree of a vertex i inside its subgraph H by $d_i^{(H)}$. Furthermore, for any W ⊆ V_H, we denote by $d_{i,W}^{(H)}$ the number of edges from vertex i to vertices in W. Let H be a connected subgraph such that the optimum of (2.2) is unique, and let $\mathcal{P} = (S_1^*, S_2^*, S_3^*)$ be the optimal partition. Define

\[
\zeta_i =
\begin{cases}
1 & \text{if } d_i^{(H)} = 1,\\
d_{i,S_1^*}^{(H)} + d_{i,S_3^*}^{(H)} + d_{i,V_1}^{(H)} & \text{if } i \in S_1^*,\\
d_{i,V_1}^{(H)} + d_{i,S_1^*}^{(H)} + d_{i,S_2^*}^{(H)} - |S_2^*| - |S_3^*| + 1 & \text{if } i \in S_2^*,\\
d_{i,S_1^*}^{(H)} + d_{i,V_1}^{(H)} + d_{i,S_2^*}^{(H)} - |S_2^*| & \text{if } i \in S_3^*.
\end{cases}
\tag{5.1}
\]
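In concrete cases the exponents ζ_i are easy to tabulate. The following sketch (the function name is ours, not the paper's) computes (5.1) from an adjacency list of H and a candidate partition (S_1, S_2, S_3), where V_1 is the set of degree-one vertices of H:

```python
def zeta(adj, S1, S2, S3):
    """Compute the exponents zeta_i of (5.1) for a partition (S1, S2, S3).
    `adj` maps each vertex of H to its list of neighbours."""
    V1 = {v for v, nb in adj.items() if len(nb) == 1}   # degree-one vertices

    def d(i, W):
        # number of neighbours of i inside the vertex set W
        return sum(1 for j in adj[i] if j in W)

    out = {}
    for i in adj:
        if len(adj[i]) == 1:
            out[i] = 1
        elif i in S1:
            out[i] = d(i, S1) + d(i, S3) + d(i, V1)
        elif i in S2:
            out[i] = d(i, V1) + d(i, S1) + d(i, S2) - len(S2) - len(S3) + 1
        elif i in S3:
            out[i] = d(i, S1) + d(i, V1) + d(i, S2) - len(S2)
    return out

# Star K_{1,3}: the three leaves have degree one, the centre sits in S3.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(zeta(adj, S1=set(), S2=set(), S3={0}))  # {0: 3, 1: 1, 2: 1, 3: 1}
```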

We now provide two lemmas showing that two integrals related to the solution of the optimization problem (2.2) are finite. These integrals are the key ingredient in proving Theorem 2.1(i).

Lemma 5.1 (Induced subgraph integrals over S∗_3). Suppose that the maximum in (3.16) is uniquely attained by $\mathcal{P} = (S_1^*, S_2^*, S_3^*)$ with $|S_3^*| = s > 0$, and say that $S_3^* = [s]$. Then

\[
\int_0^\infty\!\!\cdots\!\int_0^\infty \prod_{i\in[s]} x_i^{-\tau+\zeta_i}
\prod_{\{i,j\}\in E_{S_3^*}} \frac{x_i x_j}{1+x_i x_j}
\prod_{\{u,w\}\notin E_{S_3^*}} \frac{1}{1+x_u x_w}\,
\mathrm{d}x_s\cdots \mathrm{d}x_1 < \infty.
\tag{5.2}
\]

Lemma 5.2 (Induced subgraph integrals over S∗_1 ∪ S∗_2). Suppose the optimal solution to (3.16) is unique, and attained by $\mathcal{P} = (S_1^*, S_2^*, S_3^*)$. Say that $S_2^* = [t_2]$ and $S_1^* = [t_2+t_1]\setminus[t_2]$. Then, for every a > 0,

\[
\int_0^a\!\!\cdots\!\int_0^a \int_0^\infty\!\!\cdots\!\int_0^\infty
\prod_{j\in[t_1+t_2]} x_j^{-\tau+\zeta_j}
\prod_{\{i,j\}\in E_{S_1^*,S_2^*}} \frac{x_i x_j}{1+x_i x_j}
\prod_{\{i,j\}\notin E_{S_1^*,S_2^*}} \frac{1}{1+x_i x_j}\,
\mathrm{d}x_{t_1+t_2}\cdots \mathrm{d}x_1 < \infty.
\tag{5.3}
\]

The proofs of Lemmas 5.1 and 5.2 are similar to the proofs of [10, Lemmas 7.2 and 7.3] and are therefore deferred to Appendix B.

Proof of Theorem 2.1(i). Note that $d_{\max} \le M n^{1/(\tau-1)}$ by Assumption (i). Define

\[
\gamma_i^u(n) =
\begin{cases}
M n^{1/(\tau-1)} & \text{if } i \in S_2^*,\\
n^{\alpha_i}/\varepsilon_n & \text{else},
\end{cases}
\tag{5.4}
\]

with α_i as in (2.4), and denote

\[
\gamma_i^l(n) =
\begin{cases}
1 & \text{if } i \in V_1,\\
\varepsilon_n n^{\alpha_i} & \text{else}.
\end{cases}
\tag{5.5}
\]

We then show that the expected number of subgraphs where the degree of at least one vertex i satisfies $d_i \notin [\gamma_i^l(n), \gamma_i^u(n)]$ is small, similarly to the proof of Theorem 2.2 in Section 4.


We first study the expected number of copies of H where the first vertex has degree $d_{v_1} \in [1, \gamma_1^l(n))$ and all other vertices satisfy $d_{v_i} \in [\gamma_i^l(n), \gamma_i^u(n)]$, by integrating the probability that induced subgraph H is formed over this degree range. Using Lemma 3.1, and the fact that the degree distribution can be bounded as $\mathbb{P}(D = k) \le M_2 k^{-\tau}$ for some $M_2 > 0$ by Assumption (i), we bound the expected number of such copies of H by

\[
\sum_{v} \mathbb{E}\Bigl[I(H, v)\mathbb{1}\bigl\{d_{v_1} < \gamma_1^l(n),\, d_{v_i} \in [\gamma_i^l(n), \gamma_i^u(n)]\ \forall i > 1\bigr\}\Bigr]
\le K n^k \int_1^{\gamma_1^l(n)}\!\int_{\gamma_2^l(n)}^{\gamma_2^u(n)}\!\!\cdots\!\int_{\gamma_k^l(n)}^{\gamma_k^u(n)}
(x_1\cdots x_k)^{-\tau}
\prod_{\{i,j\}\in E_H}\frac{x_i x_j}{L_n + x_i x_j}
\prod_{\{u,w\}\notin E_H}\frac{L_n}{L_n + x_u x_w}\,
\mathrm{d}x_k\cdots \mathrm{d}x_1,
\tag{5.6}
\]

for some K > 0, where we recall that $I(H, v) = \mathbb{1}\{\mathrm{URG}^{(n)}(d)|_v = E_H\}$. This integral equals zero when vertex 1 is in $V_1$, since then $[1, \gamma_1^l(n)) = \emptyset$. Suppose now that vertex 1 is in $S_2^*$. W.l.o.g. assume that $S_2^* = [t_2]$, $S_1^* = [t_1+t_2]\setminus[t_2]$ and $S_3^* = [t_1+t_2+t_3]\setminus[t_1+t_2]$.

We bound $x_i x_j/(L_n + x_i x_j)$ by

(a) $x_i x_j/L_n$ for $i, j \in S_1^*$;
(b) $x_i x_j/L_n$ for $i$ or $j$ in $V_1$;
(c) $x_i x_j/L_n$ for $i \in S_1^*$, $j \in S_3^*$ or vice versa; and
(d) 1 for $i, j \in S_2^*$ and for $i \in S_2^*$, $j \in S_3^*$ or vice versa.

Similarly, we bound $L_n/(L_n + x_i x_j)$ by

(a) 1 for $i, j \in S_1^*$;
(b) 1 for $i$ or $j$ in $V_1$;
(c) 1 for $i \in S_1^*$, $j \in S_3^*$ or vice versa; and
(d) $L_n/(x_i x_j)$ for $i, j \in S_2^*$ and for $i \in S_2^*$, $j \in S_3^*$ or vice versa.

Combining these bounds with the change of variables $y_i = x_i/n^{\alpha_i}$ yields for (5.6), for some $\tilde{K} > 0$, the bound

\[
\sum_{v} \mathbb{E}\Bigl[I(H, v)\mathbb{1}\bigl\{d_{v_1} < \gamma_1^l(n),\, d_{v_i} \in [\gamma_i^l(n), \gamma_i^u(n)]\ \forall i > 1\bigr\}\Bigr]
\le \tilde{K} n^k\, n^{|S_1^*|(2-\tau) + |S_3^*|(1-\tau)/2 - |S_2^*|}\,
n^{\frac{\tau-3}{\tau-1}E_{S_1^*} + \frac{\tau-3}{2(\tau-1)}E_{S_1^*,S_3^*} - \frac{1}{\tau-1}E_{S_1^*,V_1} - \frac{1}{2}E_{S_3^*,V_1} - \frac{\tau-2}{\tau-1}E_{S_2^*,V_1}}
\times n^{(\frac{1}{2}|S_2^*|(|S_2^*|-1) - E_{S_2^*})\frac{\tau-3}{\tau-1} + (|S_2^*||S_3^*| - E_{S_2^*,S_3^*})\frac{\tau-3}{2(\tau-1)}}
\times \int_0^{\varepsilon_n}\!\int_0^M\!\!\cdots\!\int_0^M\!\int_0^\infty\!\!\cdots\!\int_0^\infty
\prod_{i\in V_H\setminus V_1} y_i^{-\tau+\zeta_i}
\prod_{\{i,j\}\in E_{S_3^*}\cup E_{S_1^*,S_2^*}} \frac{y_i y_j}{y_i y_j + 1}
\prod_{\{u,w\}\notin E_{S_3^*}\cup E_{S_1^*,S_2^*}} \frac{1}{y_u y_w + 1}\,
\mathrm{d}y_{t_1+t_2+t_3}\cdots \mathrm{d}y_1
\prod_{j\in V_1}\int_1^\infty y_j^{1-\tau}\,\mathrm{d}y_j,
\tag{5.7}
\]

where the integrals from 0 to M correspond to vertices in $S_2^*$ and the integrals from 0 to ∞ to vertices in $S_1^*$ and $S_3^*$. Since τ ∈ (2, 3), the integrals corresponding to vertices in $V_1$ are finite. By the analysis from (3.17) to (3.20),

\[
|S_1^*|(2-\tau) + \frac{|S_3^*|(1-\tau)}{2} - |S_2^*| + k
+ \frac{\tau-3}{\tau-1}E_{S_1^*} + \frac{\tau-3}{2(\tau-1)}E_{S_1^*,S_3^*}
- \frac{1}{\tau-1}E_{S_1^*,V_1} - \frac{1}{2}E_{S_3^*,V_1} - \frac{\tau-2}{\tau-1}E_{S_2^*,V_1}
+ \Bigl(\tfrac{1}{2}|S_2^*|(|S_2^*|-1) - E_{S_2^*}\Bigr)\frac{\tau-3}{\tau-1}
+ \bigl(|S_2^*||S_3^*| - E_{S_2^*,S_3^*}\bigr)\frac{\tau-3}{2(\tau-1)}
= \frac{3-\tau}{2}\bigl(k_{2+} + B(H)\bigr) + \frac{k_1}{2}.
\tag{5.8}
\]

The integrals over $y_i$, $i \in V_H \setminus V_1$, can be split into

\[
\int_0^{\varepsilon_n}\!\int_0^M\!\!\cdots\!\int_0^M\!\int_0^\infty\!\!\cdots\!\int_0^\infty
\prod_{i\in S_1^*\cup S_2^*} y_i^{-\tau+\zeta_i}
\prod_{\{i,j\}\in E_{S_1^*,S_2^*}} \frac{y_i y_j}{y_i y_j + 1}
\prod_{\{u,w\}\notin E_{S_1^*,S_2^*}} \frac{1}{y_u y_w + 1}\,
\mathrm{d}y_{t_1+t_2}\cdots \mathrm{d}y_1
\times \int_0^\infty\!\!\cdots\!\int_0^\infty
\prod_{i\in S_3^*} y_i^{-\tau+\zeta_i}
\prod_{\{i,j\}\in E_{S_3^*}} \frac{y_i y_j}{y_i y_j + 1}
\prod_{\{u,w\}\notin E_{S_3^*}} \frac{1}{y_u y_w + 1}\,
\mathrm{d}y_{t_1+t_2+t_3}\cdots \mathrm{d}y_{t_1+t_2+1}.
\tag{5.9}
\]

By Lemma 5.1, the set of integrals on the second line of (5.9) is finite. Lemma 5.2 shows that the set of integrals on the first line of (5.9) tends to zero as $\varepsilon_n \to 0$. Thus,

\[
\int_0^{\varepsilon_n}\!\int_0^M\!\!\cdots\!\int_0^M\!\int_0^\infty\!\!\cdots\!\int_0^\infty
\prod_{i\in S_1^*\cup S_2^*} y_i^{-\tau+\zeta_i}
\prod_{\{i,j\}\in E_{S_1^*,S_2^*}} \frac{y_i y_j}{y_i y_j + 1}
\prod_{\{u,w\}\notin E_{S_1^*,S_2^*}} \frac{1}{y_u y_w + 1}\,
\mathrm{d}y_{t_1+t_2}\cdots \mathrm{d}y_1 = o(1).
\tag{5.10}
\]

Therefore, (5.7), (5.8) and (5.10) yield

\[
\sum_{v} \mathbb{E}\Bigl[I(H, v)\mathbb{1}\bigl\{d_{v_1} < \gamma_1^l(n),\, d_{v_i} \in [\gamma_i^l(n), \gamma_i^u(n)]\ \forall i > 1\bigr\}\Bigr]
= o\Bigl(n^{\frac{3-\tau}{2}(k_{2+} + B(H)) + k_1/2}\Bigr),
\tag{5.11}
\]

when vertex 1 is in $S_2^*$. Similarly, we can show that the expected contribution from $d_{v_1} < \gamma_1^l(n)$ satisfies the same bound when vertex 1 is in $S_1^*$ or $S_3^*$. The expected number of subgraphs where $d_{v_1} > \gamma_1^u(n)$ if vertex 1 is in $S_1^*$, $S_3^*$ or $V_1$ can be bounded similarly, as well as the expected contribution where multiple vertices have $d_{v_i} \notin [\gamma_i^l(n), \gamma_i^u(n)]$. Denote

\[
\Gamma_n(\varepsilon_n) = \bigl\{(v_1,\ldots,v_k) : d_{v_i} \in [\gamma_{v_i}^l(n), \gamma_{v_i}^u(n)]\bigr\},
\tag{5.12}
\]

and define $\bar{\Gamma}_n(\varepsilon_n)$ as its complement. Denote the number of subgraphs with vertices in $\bar{\Gamma}_n(\varepsilon_n)$ by $N(H, \bar{\Gamma}_n(\varepsilon_n))$. Since $d_{\max} \le M n^{1/(\tau-1)}$, $\Gamma_n(\varepsilon_n) = M_n^{(\alpha)}(\varepsilon_n)$. Therefore,

\[
N\bigl(H, \bar{M}_n^{(\alpha)}(\varepsilon_n)\bigr) = N\bigl(H, \bar{\Gamma}_n(\varepsilon_n)\bigr),
\tag{5.13}
\]


Figure 2: The subgraph H that is used in Algorithm 1. Algorithm 1 attempts to find a copy of H where the dark vertices are in V′ and the light vertices in W′.

where $N(H, \bar{M}_n^{(\alpha)}(\varepsilon_n))$ denotes the number of copies of H on vertices not in $M_n^{(\alpha)}(\varepsilon_n)$. By the Markov inequality and (5.11),

\[
N\bigl(H, \bar{M}_n^{(\alpha)}(\varepsilon_n)\bigr) = N\bigl(H, \bar{\Gamma}_n(\varepsilon_n)\bigr)
= o_{\mathbb{P}}\Bigl(n^{\frac{3-\tau}{2}(k_{2+} + B(H)) + k_1/2}\Bigr).
\tag{5.14}
\]

Combining this with Theorem 2.1(ii), for fixed ε > 0,

\[
N(H) = N\bigl(H, M_n^{(\alpha)}(\varepsilon)\bigr) + N\bigl(H, \bar{M}_n^{(\alpha)}(\varepsilon)\bigr)
= O_{\mathbb{P}}\Bigl(n^{\frac{3-\tau}{2}(k_{2+} + B(H)) + k_1/2}\Bigr)
\tag{5.15}
\]

shows that

\[
\frac{N\bigl(H, M_n^{(\alpha)}(\varepsilon_n)\bigr)}{N(H)} \xrightarrow{\ \mathbb{P}\ } 1,
\tag{5.16}
\]

as required. This completes the proof of Theorem 2.1(i).

6 Proof of Theorem 2.3

Algorithm 1: Finding the induced subgraph H of Figure 2.

Input: G = (V, E).
Output: Location of H in G or fail.

Define n = |V|, ε_n = 1/log(n), I_n = [n^{1/(τ−1)}ε_n, n^{1/(τ−1)}/ε_n],
J_n = [n^{(τ−2)/(τ−1)}ε_n, n^{(τ−2)/(τ−1)}/ε_n], and set V′ = ∅ and W′ = ∅.
for i ∈ V do
    if d_i ∈ I_n then V′ = V′ ∪ {i};
    if d_i ∈ J_n then W′ = W′ ∪ {i};
end
Divide the vertices in V′ randomly into ⌊|V′|/2⌋ pairs S_1, . . . , S_{⌊|V′|/2⌋}.
Divide the vertices in W′ randomly into ⌊|W′|/4⌋ sets of size 4, T_1, . . . , T_{⌊|W′|/4⌋}.
Set k = 0.
for j = 1, . . . , ⌊|V′|/2⌋ do
    for i = 1, . . . , ⌊|W′|/4⌋ do
        k = k + 1.
        if H is an induced subgraph on S_j ∪ T_i then return location of H;
        if k = n then return fail;
    end
end
return fail
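A direct transcription of Algorithm 1 might look as follows. This is a sketch under our own naming, not the paper's code; the test for whether the specific subgraph H of Figure 2 appears, induced, on a candidate vertex set is left as a pluggable callback, since the edge set of H is defined by the figure.

```python
import math
import random

def algorithm_1(n, degrees, adj, tau, is_induced_H):
    """Sketch of Algorithm 1. `is_induced_H(S, adj)` must decide whether H
    (the subgraph of Figure 2) appears as an induced subgraph on vertex set S."""
    eps = 1 / math.log(n)
    hi = n ** (1 / (tau - 1))              # typical hub degree, window I_n
    med = n ** ((tau - 2) / (tau - 1))     # typical mid degree, window J_n
    V_prime = [v for v in range(n) if hi * eps <= degrees[v] <= hi / eps]
    W_prime = [v for v in range(n) if med * eps <= degrees[v] <= med / eps]
    random.shuffle(V_prime)
    random.shuffle(W_prime)
    pairs = [V_prime[2 * i: 2 * i + 2] for i in range(len(V_prime) // 2)]
    quads = [W_prime[4 * i: 4 * i + 4] for i in range(len(W_prime) // 4)]
    attempts = 0
    for S in pairs:
        for T in quads:
            attempts += 1
            if is_induced_H(set(S) | set(T), adj):
                return S, T                 # location of H
            if attempts >= n:
                return None                 # 'fail'
    return None                             # 'fail'
```

Each candidate set is tested in constant time (H has six vertices) and at most n candidates are tried, which is what makes the algorithm linear-time.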


Proof of Theorem 2.3. Algorithm 1 shows the algorithm that distinguishes uniform random graphs from rank-1 inhomogeneous random graphs with connection probabilities (2.11) or (2.10). It first selects only vertices of degrees proportional to $n^{1/(\tau-1)}$ and $n^{(\tau-2)/(\tau-1)}$, and then randomly selects such vertices and checks whether they form a copy of the induced subgraph H of Figure 2. We will show that with high probability, Algorithm 1 finds a copy of H when the input graph is generated by a uniform random graph, and that with high probability, Algorithm 1 outputs 'fail' when the input graph is a rank-1 inhomogeneous random graph with connection probability (2.11) or (2.10).

We first focus on the performance of Algorithm 1 when the input G is a uniform random graph. Algorithm 1 detects copies of subgraph H where the vertices have degrees as illustrated in Figure 1(c): two vertices of degree proportional to $n^{1/(\tau-1)}$ and four of degree proportional to $n^{(\tau-2)/(\tau-1)}$. By Theorem 2.1(ii), there are with high probability at least $c\, n^{(4-\frac{1}{\tau-1})(3-\tau)}$ such induced subgraphs for some c > 0. Furthermore, denote

\[
\boldsymbol{\alpha} = \bigl[n^{1/(\tau-1)}, n^{1/(\tau-1)}, n^{(\tau-2)/(\tau-1)}, n^{(\tau-2)/(\tau-1)}, n^{(\tau-2)/(\tau-1)}, n^{(\tau-2)/(\tau-1)}\bigr].
\tag{6.1}
\]

By Assumption 1.1,

\[
\bigl|M_n^{(\alpha)}\bigr| = \Theta\bigl(n^{4(3-\tau)} \log(n)^{6(\tau-1)}\bigr),
\tag{6.2}
\]

so that there are at most $c_2 n^{4(3-\tau)} \log(n)^{6(\tau-1)}$ sets of vertices with degrees in $M_n^{(\alpha)}$ that form no copy of induced subgraph H, for some $c_2 < \infty$. Thus, the probability that a randomly chosen set of vertices with degrees in $M_n^{(\alpha)}$ forms H is at least

\[
\frac{c\, n^{(4-\frac{1}{\tau-1})(3-\tau)}}{c_2 n^{4(3-\tau)} \log(n)^{6(\tau-1)}}
= \frac{c}{c_2}\, n^{(\tau-3)/(\tau-1)} \log(n)^{6(1-\tau)}.
\tag{6.3}
\]

Algorithm 1 tries at most n such sets of vertices with degrees in $M_n^{(\alpha)}$, and therefore attempts to find subgraph H in $\Theta(\min(n, n^{4(3-\tau)}\log(n)^{6(\tau-1)})) = \Theta(f(n))$ attempts, where $f(n) = \min(n, n^{4(3-\tau)}\log(n)^{6(\tau-1)})$. Thus, the probability that the algorithm does not find a copy of induced subgraph H among all attempts is bounded by

\[
\mathbb{P}(\text{Algorithm does not find } H) \le \Bigl(1 - n^{\frac{\tau-3}{\tau-1}}\Bigr)^{f(n)} \le e^{-n^{\gamma}},
\tag{6.4}
\]

for some γ > 0, where we have used that $1 - x \le e^{-x}$.

We now analyze the performance of Algorithm 1 on rank-1 inhomogeneous random graphs with connection probability (2.11). As these graphs asymptotically have the same degree distribution, (6.2) also holds there. Furthermore, the probability that vertices in $M_n^{(\alpha)}$ together form a copy of H is

\[
\prod_{\{i,j\}\in E_H} p(i,j) \prod_{\{i,j\}\notin E_H} \bigl(1-p(i,j)\bigr)
\le e^{-n^{\frac{1}{\tau-1}} n^{\frac{1}{\tau-1}}/\log(n)^2}
\le e^{-n^{\gamma_2}}
\tag{6.5}
\]

for some $\gamma_2 > 0$, where we bounded all $p(i,j)$ and $1-p(i,j)$ by 1, except for $1-p(i,j)$ for the non-edge between the two vertices of degree at least $n^{1/(\tau-1)}/\log(n)$ (the vertices in the bottom left and bottom right corners of Figure 2). Thus, there are at most $\Theta\bigl(n^{4(3-\tau)} e^{-n^{\frac{3-\tau}{\tau-1}}} \log(n)^{6(\tau-1)}\bigr)$ copies of induced subgraph H on sets of vertices in $M_n^{(\alpha)}$. Therefore, the probability that a randomly chosen set of vertices with degrees in $M_n^{(\alpha)}$ forms H is at most

\[
\frac{c_3 n^{4(3-\tau)} e^{-n^{\frac{3-\tau}{\tau-1}}} \log(n)^{6(\tau-1)}}{n^{4(3-\tau)} \log(n)^{6(\tau-1)}} = c_3 e^{-n^{\frac{3-\tau}{\tau-1}}}.
\tag{6.6}
\]


Then, the probability that the algorithm does not find a copy of induced subgraph H among all n attempts is bounded by

\[
\mathbb{P}(\text{Algorithm does not find } H) \ge \Bigl(1 - c_3 e^{-n^{\frac{3-\tau}{\tau-1}}}\Bigr)^{f(n)}
= 1 - c_3 f(n) e^{-n^{\frac{3-\tau}{\tau-1}}} + O\Bigl(f(n)^2 e^{-2n^{\frac{3-\tau}{\tau-1}}}\Bigr).
\tag{6.7}
\]

Thus, with high probability the algorithm outputs 'fail' when the input graph is a rank-1 inhomogeneous random graph with connection probability (2.11). A similar calculation shows that the algorithm outputs 'fail' with high probability when the input graph G is a rank-1 inhomogeneous random graph with connection probabilities (2.10).

References

[1] M. Boguñá and R. Pastor-Satorras. Class of correlated random networks with hidden variables. Phys. Rev. E, 68:036112, 2003.

[2] B. Bollobás. A probabilistic proof of an asymptotic formula for the number of labelled regular graphs. European J. Combin., 1(4):311–316, 1980.

[3] T. Britton, M. Deijfen, and A. Martin-Löf. Generating simple random graphs with prescribed degree distribution. J. Stat. Phys., 124(6):1377–1397, 2006.

[4] F. Chung and L. Lu. The average distances in random graphs with given expected degrees. Proc. Natl. Acad. Sci. USA, 99(25):15879–15882, 2002.

[5] J. Gao, R. van der Hofstad, A. Southwell, and C. Stegehuis. Counting triangles in power-law uniform random graphs. 2018.

[6] P. Gao and Y. Ohapkin. Subgraph probability of random graphs with specified degrees and applications to chromatic number and connectivity.

[7] A. Garavaglia and C. Stegehuis. Subgraphs in preferential attachment models. Advances in Applied Probability, 51(3):898–926, 2019.

[8] H. Garmo. The asymptotic distribution of long cycles in random regular graphs. Random Structures and Algorithms, 15(1):43–92, 1999.

[9] R. van der Hofstad, A. J. E. M. Janssen, J. S. H. van Leeuwaarden, and C. Stegehuis. Local clustering in scale-free networks with hidden variables. Phys. Rev. E, 95(2):022307, 2017.

[10] R. van der Hofstad, J. S. H. van Leeuwaarden, and C. Stegehuis. Optimal subgraph structures in scale-free configuration models. 2017.

[11] S. Janson. The probability that a random multigraph is simple. Combin. Probab. Comput., 18(1-2):205, 2009.

[12] S. Janson, T. Łuczak, and I. Norros. Large cliques in a power-law random graph. J. Appl. Probab., 47(04):1124–1135, 2010.


[13] R. M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103. Springer, 1972.

[14] M. Molloy and B. Reed. A critical point for random graphs with a given degree sequence. Random Structures & Algorithms, 6(2-3):161–180, 1995.

[15] C. Stegehuis, R. van der Hofstad, and J. S. H. van Leeuwaarden. Variational principle for scale-free network motifs. Scientific Reports, 9(1):6762, 2019.

[16] N. C. Wormald. The asymptotic distribution of short cycles in random regular graphs. Journal of Combinatorial Theory, Series B, 31(2):168–182, 1981.

A Proof of Lemma 3.2

Proof of Lemma 3.2. By defining $\beta_i = \alpha_i - \tfrac{1}{2}$ and

\[
a_{ij}(\beta_i, \beta_j) =
\begin{cases}
1 & \beta_i + \beta_j < 0,\ \{i,j\} \in E_H,\\
-1 & \beta_i + \beta_j > 0,\ \{i,j\} \notin E_H,\\
0 & \text{else},
\end{cases}
\tag{A.1}
\]

we can rewrite (3.16) as

\[
\max_{\beta}\ \frac{1-\tau}{2}\, k + \sum_i \beta_i \Bigl(1 - \tau + \sum_{j\ne i} a_{ij}(\beta_i, \beta_j)\Bigr),
\tag{A.2}
\]

over all possible values of $\beta_i \in [-\tfrac{1}{2}, \tfrac{3-\tau}{2(\tau-1)}]$. We ignore the constant term $\tfrac{(1-\tau)k}{2}$ in (A.2), since it does not influence the optimal β values. Then, we have to prove that $\beta_i \in \{-\tfrac{1}{2}, \tfrac{\tau-3}{2(\tau-1)}, 0, \tfrac{3-\tau}{2(\tau-1)}\}$ for all i in the optimal solution. Note that (A.2) is a piecewise linear function in $\beta_1,\ldots,\beta_k$. Therefore, if (A.2) has a unique maximum, then it must be attained at the boundary for $\beta_i$ or at a border of one of the linear sections. Thus, any unique optimal value of $\beta_i$ satisfies $\beta_i = -\tfrac{1}{2}$, $\beta_i = \tfrac{3-\tau}{2(\tau-1)}$, or $\beta_i + \beta_j = 0$ for some j.

The proof of the lemma then consists of three steps:

Step 1. Show that $\beta_i = -\tfrac{1}{2}$ if and only if vertex i has degree 1 in H in any optimal solution.

Step 2. Show that any unique solution does not contain i with $|\beta_i| \in (0, \tfrac{3-\tau}{2(\tau-1)})$.

Step 3. Show that any optimal solution that is not unique can be transformed into two different optimal solutions with $\beta_i \in \{-\tfrac{1}{2}, \tfrac{\tau-3}{2(\tau-1)}, 0, \tfrac{3-\tau}{2(\tau-1)}\}$ for all i.

Step 1. Let i be a vertex of degree 1 in H, and let j be the neighbor of i. The contribution from vertex i to (A.2) is

\[
\beta_i\Bigl(1-\tau+\mathbb{1}\{\beta_i<-\beta_j\}-\sum_{s\ne i,j}\mathbb{1}\{\beta_i>-\beta_s\}\Bigr).
\tag{A.3}
\]

This contribution is maximized by choosing $\beta_i = -\tfrac{1}{2}$, as τ ∈ (2, 3). Thus, $\beta_i = -\tfrac{1}{2}$ in the optimal solution if the degree of vertex i is one.

Now let $i \in V_H$ be such that $d_i^{(H)} \ge 2$, where $d_i^{(H)}$ denotes the degree of i in H, and suppose that $\beta_i = -\tfrac{1}{2}$. The maximal value of $\beta_j$ for $j \ne i$ is $\tfrac{3-\tau}{2(\tau-1)}$, which implies that $\beta_i + \beta_j < 0$ for all j. Thus, the contribution to the i-th term of (A.2) is

\[
-\tfrac{1}{2}\Bigl(1-\tau+d_i^{(H)}\Bigr) < 0,
\tag{A.4}
\]

for any $\beta_j$, $j \ne i$. Increasing $\beta_i$ to $\tfrac{\tau-3}{2(\tau-1)}$ then gives a higher contribution. Thus, $\beta_i \ge \tfrac{\tau-3}{2(\tau-1)}$ when $d_i^{(H)} \ge 2$.

Step 2. We now show that when the solution to (A.2) is unique, it is never optimal to have $|\beta_i| \in (0, \tfrac{3-\tau}{2(\tau-1)})$. Let

\[
\tilde{\beta} = \min_{i : |\beta_i| > 0} |\beta_i|.
\tag{A.5}
\]

Let $N_{\tilde{\beta}}^-$ denote the number of vertices with β value equal to $-\tilde{\beta}$, and $N_{\tilde{\beta}}^+$ the number of vertices with value $\tilde{\beta}$, where $N_{\tilde{\beta}}^+ + N_{\tilde{\beta}}^- \ge 1$. Furthermore, let $E^{+}_{\tilde{\beta}^{-}}$ denote the number of edges from vertices with value $-\tilde{\beta}$ to other vertices j such that $\beta_j < \tilde{\beta}$, and $E^{+}_{\tilde{\beta}^{+}}$ the number of edges from vertices with value $\tilde{\beta}$ to other vertices j such that $\beta_j < -\tilde{\beta}$. Similarly, let $E^{-}_{\tilde{\beta}^{-}}$ denote the number of non-edges from vertices with value $-\tilde{\beta}$ to other vertices j such that $\beta_j > \tilde{\beta}$, and $E^{-}_{\tilde{\beta}^{+}}$ the number of non-edges from vertices with value $\tilde{\beta}$ to other vertices j such that $\beta_j > -\tilde{\beta}$. Then, the contribution from these vertices to (A.2) is

\[
\tilde{\beta}\Bigl((1-\tau)\bigl(N_{\tilde{\beta}}^+ - N_{\tilde{\beta}}^-\bigr) + E^{+}_{\tilde{\beta}^{+}} - E^{+}_{\tilde{\beta}^{-}} - E^{-}_{\tilde{\beta}^{+}} + E^{-}_{\tilde{\beta}^{-}}\Bigr).
\tag{A.6}
\]

Because we assume β to be optimal, and the optimum to be unique, the value inside the brackets cannot equal zero. The contribution is linear in $\tilde{\beta}$ and it is the optimal contribution, and therefore $\tilde{\beta} \in \{0, \tfrac{3-\tau}{2(\tau-1)}\}$. This shows that $\beta_i \in \{\tfrac{\tau-3}{2(\tau-1)}, 0, \tfrac{3-\tau}{2(\tau-1)}\}$ for all i such that $d_i^{(H)} \ge 2$.

Step 3. Suppose that the solution to (A.2) is not unique, and that $\beta^*$ appears in one of the optimizers of (A.2). In the same notation as in (A.6), the contribution from vertices with β-values $\beta^*$ and $-\beta^*$ equals

\[
\beta^*\Bigl[(1-\tau)\bigl(N_{\beta^*}^+ - N_{\beta^*}^-\bigr) + E^{+}_{\beta^{*+}} - E^{+}_{\beta^{*-}} - E^{-}_{\beta^{*+}} + E^{-}_{\beta^{*-}}\Bigr].
\tag{A.7}
\]

Since this contribution is linear in $\beta^*$, the contribution of these vertices can only be non-unique if the term within the square brackets equals zero. Thus, for the solution to (A.2) to be non-unique, there must exist $\hat{\beta}_1,\ldots,\hat{\beta}_s > 0$ for some $s \ge 1$ such that

\[
\hat{\beta}_j\Bigl[(1-\tau)\bigl(N_{\hat{\beta}_j}^+ - N_{\hat{\beta}_j}^-\bigr) + E^{+}_{\hat{\beta}_j^{+}} - E^{+}_{\hat{\beta}_j^{-}} - E^{-}_{\hat{\beta}_j^{+}} + E^{-}_{\hat{\beta}_j^{-}}\Bigr] = 0 \qquad \forall j \in [s].
\tag{A.8}
\]

Setting all $\hat{\beta}_j = 0$ and setting all $\hat{\beta}_j = \tfrac{3-\tau}{2(\tau-1)}$ are both optimal solutions. Thus, if the solution to (A.2) is not unique, at least two solutions exist with $\beta_i \in \{\tfrac{\tau-3}{2(\tau-1)}, 0, \tfrac{3-\tau}{2(\tau-1)}\}$ for all $i \in V_H$.
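Since Lemma 3.2 reduces the continuous optimization (A.2) to the four candidate values per vertex, the optimizer can be recovered by a brute-force grid search for any small H. The sketch below (our own naming) drops the constant (1−τ)k/2, as in the text above:

```python
import itertools

def optimize_beta(edges, k, tau):
    """Brute-force the reduced objective (A.2) over the four candidate
    beta values of Lemma 3.2, for a subgraph H on vertices 0..k-1."""
    cand = [-0.5, (tau - 3) / (2 * (tau - 1)), 0.0, (3 - tau) / (2 * (tau - 1))]
    E = {frozenset(e) for e in edges}

    def a(i, j, b):
        # the piecewise coefficient a_ij(beta_i, beta_j) of (A.1)
        if b[i] + b[j] < 0 and frozenset((i, j)) in E:
            return 1
        if b[i] + b[j] > 0 and frozenset((i, j)) not in E:
            return -1
        return 0

    def objective(b):
        return sum(b[i] * (1 - tau + sum(a(i, j, b) for j in range(k) if j != i))
                   for i in range(k))

    return max(itertools.product(cand, repeat=k), key=objective)

# Triangle K_3 with tau = 2.5: the optimum puts every beta_i at 0,
# i.e. all three vertices have degree of order sqrt(mu n).
print(optimize_beta([(0, 1), (1, 2), (0, 2)], 3, 2.5))  # (0.0, 0.0, 0.0)
```

The search costs 4^k objective evaluations, which is negligible for the small motifs considered here.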

B Proof of Lemmas 5.1 and 5.2

We first provide a lemma that states several properties of the variable $\zeta_i$ of (5.1) that will be used in the proofs below.

Lemma B.1 (Bounds on ζ_i). Let H be a connected subgraph such that the optimum of (2.2) is unique, and let $\mathcal{P} = (S_1^*, S_2^*, S_3^*)$ be the optimal partition. Then

(i) $\zeta_i - d_{i,S_2^*}^{(H)} - |S_2^*| \le 1$ for $i \in S_1^*$;

(ii) $d_{i,S_3^*}^{(H)} + \zeta_i \ge 1$ for $i \in S_2^*$;

(iii) $\zeta_i + d_{i,S_3^*}^{(H)} - |S_3^*| \le 0$ and $d_{i,S_3^*}^{(H)} + \zeta_i \ge 2$ for $i \in S_3^*$.

Proof. Suppose first that $i \in S_1^*$. Now consider the partition $\hat{S}_1 = S_1^* \setminus \{i\}$, $\hat{S}_2 = S_2^*$, $\hat{S}_3 = S_3^* \cup \{i\}$. Then $E_{\hat{S}_1} = E_{S_1^*} - d_{i,S_1^*}^{(H)}$, $E_{\hat{S}_1,\hat{S}_3} = E_{S_1^*,S_3^*} + d_{i,S_1^*}^{(H)} - d_{i,S_3^*}^{(H)}$ and $E_{\hat{S}_2,\hat{S}_3} = E_{S_2^*,S_3^*} + d_{i,S_2^*}^{(H)}$. Furthermore, $E_{\hat{S}_1,V_1} = E_{S_1^*,V_1} - d_{i,V_1}^{(H)}$ and $E_{\hat{S}_2,V_1} = E_{S_2^*,V_1}$. Because the partition into $S_1^*$, $S_2^*$ and $S_3^*$ achieves the unique optimum of (2.2),

\[
|S_1^*| + |S_2^*|\frac{2-\tau-k+|S_1^*|+k_1}{\tau-1}
+ \frac{-2E_{S_1^*} - 2E_{S_2^*} + E_{S_1^*,S_3^*} - E_{S_2^*,S_3^*} + E_{S_1^*,V_1} - E_{S_2^*,V_1}}{\tau-1}
> |S_1^*| - 1 + |S_2^*|\frac{1-\tau-k+|S_1^*|+k_1}{\tau-1}
+ \frac{-2E_{S_1^*} - 2E_{S_2^*} + E_{S_1^*,S_3^*} - E_{S_2^*,S_3^*} + d_{i,S_1^*}^{(H)} + d_{i,S_3^*}^{(H)} - d_{i,S_2^*}^{(H)} + E_{S_1^*,V_1} - E_{S_2^*,V_1} + d_{i,V_1}^{(H)}}{\tau-1},
\tag{B.1}
\]

which reduces to

\[
d_{i,S_1^*}^{(H)} + d_{i,S_3^*}^{(H)} + d_{i,V_1}^{(H)} - d_{i,S_2^*}^{(H)} - |S_2^*|
= \zeta_i - d_{i,S_2^*}^{(H)} - |S_2^*| < \tau - 1.
\tag{B.2}
\]

Using that τ ∈ (2, 3) and that the left-hand side is an integer then yields $\zeta_i - d_{i,S_2^*}^{(H)} - |S_2^*| \le 1$, which is (i).

Similar arguments give the other inequalities. For example, for $i \in S_3^*$, considering the partition where i is moved to $S_1^*$ gives the inequality $d_{i,S_3^*}^{(H)} + d_{i,S_1^*}^{(H)} + d_{i,V_1}^{(H)} \ge 2$, and considering the partition where i is moved to $S_2^*$ results in the inequality $d_{i,S_1^*}^{(H)} + d_{i,V_1}^{(H)} \le 1$, so that $\zeta_i \le 1$.

Proof of Lemma 5.1. Recall that $S_3^* = [s]$. First of all,

\[
\int_0^\infty\!\!\cdots\!\int_0^\infty \prod_{i\in[s]} x_i^{-\tau+\zeta_i}
\prod_{\{i,j\}\in E_{S_3^*}} \frac{x_i x_j}{1+x_i x_j}
\prod_{\{i,j\}\notin E_{S_3^*}} \frac{1}{1+x_i x_j}\,\mathrm{d}x_s\cdots\mathrm{d}x_1
\le \int_0^\infty\!\!\cdots\!\int_0^\infty \prod_{i\in[s]} x_i^{-\tau+\zeta_i}
\prod_{\{i,j\}\in E_{S_3^*}} \min(x_i x_j, 1)
\prod_{\{i,j\}\notin E_{S_3^*}} \min\bigl(1/(x_i x_j), 1\bigr)\,\mathrm{d}x_s\cdots\mathrm{d}x_1.
\tag{B.3}
\]

We compute the contribution to (B.3) where the integrand runs from 1 to ∞ for vertices in some nonempty set U, and from 0 to 1 for vertices in $\bar{U} = S_3^* \setminus U$. W.l.o.g., assume that $U = [t]$ for some $1 \le t < s$ and that $x_1 < x_2 < \cdots < x_t$. Define, for $i \in \bar{U}$,

\[
\tilde{h}(i, \boldsymbol{x}) = \int_0^1 x_i^{-\tau+\zeta_i+d_{i,\bar{U}}^{(H)}}
\prod_{j\in[t] :\, \{i,j\}\in E_{S_3^*}} \min(x_i x_j, 1)
\prod_{j\in[t] :\, \{i,j\}\notin E_{S_3^*}} \min\bigl(1/(x_i x_j), 1\bigr)\,\mathrm{d}x_i.
\tag{B.4}
\]

Then (5.2) can be bounded by

\[
\int_1^\infty\!\!\cdots\!\int_1^\infty \prod_{p\in[t]} x_p^{-\tau+\zeta_p}
\prod_{\substack{u,v\in U :\\ \{u,v\}\notin E_{S_3^*}}} \frac{1}{x_u x_v}
\prod_{i=t+1}^{s} \tilde{h}(i, \boldsymbol{x})\,\mathrm{d}x_t\cdots\mathrm{d}x_1
= \int_1^\infty\!\!\cdots\!\int_1^\infty \prod_{p\in[t]} x_p^{-\tau+\zeta_p-(|U|-1-d_{p,U}^{(H)})}
\prod_{i=t+1}^{s} \tilde{h}(i, \boldsymbol{x})\,\mathrm{d}x_t\cdots\mathrm{d}x_1
= \int_1^\infty\!\!\cdots\!\int_1^\infty \prod_{p\in[t]} x_p^{-\tau+\zeta_p-t+1+d_{p,[t]}^{(H)}}
\prod_{i=t+1}^{s} \tilde{h}(i, \boldsymbol{x})\,\mathrm{d}x_t\cdots\mathrm{d}x_1.
\]

We can write $\tilde{h}(i,\boldsymbol{x})$ as

\[
\tilde{h}(i,\boldsymbol{x}) =
\int_0^{1/x_t} x_i^{-\tau+\zeta_i+d_{i,S_3^*}^{(H)}}\,\mathrm{d}x_i \cdot \prod_{j=1}^{t} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}}
+ \int_{1/x_t}^{1/x_{t-1}} x_i^{-\tau-1+\zeta_i+d_{i,S_3^*}^{(H)}}\,\mathrm{d}x_i \cdot \prod_{j=1}^{t-1} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}} \prod_{k=t}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}
+ \int_{1/x_{t-1}}^{1/x_{t-2}} x_i^{-\tau+\zeta_i+d_{i,S_3^*}^{(H)}-2}\,\mathrm{d}x_i \cdot \prod_{j=1}^{t-2} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}} \prod_{k=t-1}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}
+ \cdots
+ \int_{1/x_1}^{1} x_i^{-\tau+\zeta_i+d_{i,S_3^*}^{(H)}-t}\,\mathrm{d}x_i \cdot \prod_{k=1}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}.
\tag{B.5}
\]

By Lemma B.1, $\zeta_i + d_{i,S_3^*}^{(H)} \ge 2$ for $i \in S_3^*$, so that the first integral is finite. Computing these integrals yields

\[
\tilde{h}(i,\boldsymbol{x}) = C_0 \prod_{k=1}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}
+ C_1 x_1^{\tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-2} \prod_{j=1}^{1} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}} \prod_{k=2}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}
+ C_2 x_2^{\tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-3} \prod_{j=1}^{2} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}} \prod_{k=3}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}
+ \cdots
+ C_t x_t^{\tau-\zeta_i-d_{i,S_3^*}^{(H)}-1} \prod_{j=1}^{t} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}}
=: C_0 h_0(i,\boldsymbol{x}) + C_1 h_1(i,\boldsymbol{x}) + \cdots + C_t h_t(i,\boldsymbol{x}),
\tag{B.6}
\]

for some constants $C_0,\ldots,C_t$. Assume that i is connected to l vertices in U, so that there are l vertices $j \in \{1,2,\ldots,t\}$ with $\mathbb{1}\{\{i,j\}\in E_{S_3^*}\} = 1$ and $t-l$ with $\mathbb{1}\{\{i,j\}\notin E_{S_3^*}\} = 1$. Then,

\[
\frac{h_{p+1}(i,\boldsymbol{x})}{h_p(i,\boldsymbol{x})}
= \frac{\prod_{j=1}^{p+1} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}}\; x_{p+1}^{\tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-1-(p+1)}\; \prod_{k=p+2}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}}
{\prod_{j=1}^{p} x_j^{\mathbb{1}\{\{i,j\}\in E_{S_3^*}\}}\; x_p^{\tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-1-p}\; \prod_{k=p+1}^{t} x_k^{-\mathbb{1}\{\{i,k\}\notin E_{S_3^*}\}}}
= \Bigl(\frac{x_{p+1}}{x_p}\Bigr)^{\tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-p-1},
\]

which is larger than 1 for $p < \tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-2$, as $x_{p+1} > x_p$, and at most 1 for $p \ge \tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-2$. Thus, $p^* = p_i^* = \operatorname{argmax}_p h_p(i,\boldsymbol{x}) = \lfloor\tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-1\rfloor$. Therefore, there exists a K > 0 such that

\[
\tilde{h}(i,\boldsymbol{x}) \le K h_{p_i^*}(i,\boldsymbol{x}).
\tag{B.7}
\]

For all $j \in U$, let

\[
Q_j^+ = \{i \in \bar{U} : \{i,j\} \in E_{S_3^*},\ p_i^* \ge j\}
\tag{B.8}
\]

denote the set of neighbors $i \in \bar{U}$ of $j \in U$ such that $x_j$ appears in $h_{p_i^*}(i,\boldsymbol{x})$ with exponent +1. Similarly, let

\[
Q_j^- = \{i \in \bar{U} : \{i,j\} \notin E_{S_3^*},\ p_i^* < j\}
\tag{B.9}
\]

be the set of non-neighbors $i \in \bar{U}$ of j such that $x_j$ appears in $h_{p_i^*}(i,\boldsymbol{x})$ with exponent −1. Furthermore, let $W_j = \{i \in \bar{U} : p_i^* = j\}$, so that the vertices in $W_j$ appear with exponent $\tau-\zeta_i-d_{i,S_3^*}^{(H)}+t-1-j$ in $h_{p_i^*}(i,\boldsymbol{x})$. Then, by the definition of $\zeta_i$ in (5.1),

\[
\sum_{i\in W_j} \Bigl(\zeta_i + d_{i,S_3^*}^{(H)} - t + j\Bigr)
= 2E_{W_j} + E_{W_j, V_H\setminus W_j} + (j - t - |S_2^*|)|W_j|.
\tag{B.10}
\]

This yields

\[
\int_1^\infty\!\!\cdots\!\int_{x_{t-1}}^\infty \prod_{j\in[t]} x_j^{-\tau+\zeta_j-t+1+d_{j,[t]}^{(H)}}
\prod_{i=t+1}^{s} \tilde{h}(i,\boldsymbol{x})\,\mathrm{d}x_t\cdots\mathrm{d}x_1
\le \tilde{K} \int_1^\infty\!\!\cdots\!\int_{x_{t-1}}^\infty \prod_{j\in[t]} x_j^{-\tau+\zeta_j-t+1+d_{j,[t]}^{(H)}}
\prod_{i=t+1}^{s} h_{p_i^*}(i,\boldsymbol{x})\,\mathrm{d}x_t\cdots\mathrm{d}x_1
\le \tilde{K} \int_1^\infty\!\int_{x_1}^\infty\!\!\cdots\!\int_{x_{t-1}}^\infty \prod_{j\in[t]}
x_j^{-\tau+\zeta_j-t+1+d_{j,[t]}^{(H)}+|Q_j^+|-|Q_j^-|+(\tau-1-j+t+|S_2^*|)|W_j|-2E_{W_j}-E_{W_j,\hat{W}_j}}\,\mathrm{d}x_t\cdots\mathrm{d}x_1,
\tag{B.11}
\]

for some $\tilde{K} > 0$, where $\hat{W}_j = V_H \setminus W_j$.

We now use the uniqueness of the solution of the optimization problem in (3.16) to prove that the integral over $x_t$ in (B.11) is finite. First of all,

\[
Q_t^+ = \{i \in \bar{U} : \{i,t\} \in E_{S_3^*},\ p_i^* = t\} = \{i \in W_t : \{i,t\} \in E_{S_3^*}\},
\tag{B.12}
\]

so that $|Q_t^+| = d_{t,W_t}^{(H)}$, whereas

\[
Q_t^- = \varnothing.
\tag{B.13}
\]

We now prove that the exponent of $x_t$ in (B.11) satisfies

\[
-\tau + \zeta_t - t + 1 + d_{t,[t]}^{(H)} + |Q_t^+| - |Q_t^-| + (\tau-1+|S_2^*|)|W_t| - 2E_{W_t} - E_{W_t,\hat{W}_t} < -1,
\]

or, equivalently,

\[
(\tau-1+|S_2^*|)|W_t| + 1 - \tau - |S_2^*| - t - 2E_{W_t} - E_{W_t,\hat{W}_t} + d_t^{(H)} - d_{t,\bar{U}\setminus W_t}^{(H)} < -1.
\tag{B.14}
\]

Define $\hat{S}_2 = S_2^* \cup \{t\}$, $\hat{S}_1 = S_1^* \cup W_t$ and $\hat{S}_3 = S_3^* \setminus (W_t \cup \{t\})$. This gives

\[
\begin{aligned}
E_{\hat{S}_1} - E_{S_1^*} &= E_{W_t} + E_{W_t,S_1^*}, &&\text{(B.15)}\\
E_{\hat{S}_1,\hat{S}_3} - E_{S_1^*,S_3^*} &= E_{W_t,S_3^*} - E_{W_t} - E_{W_t,S_1^*} - |Q_t^+| - d_{t,S_1^*}^{(H)}, &&\text{(B.16)}\\
E_{\hat{S}_2,\hat{S}_3} - E_{S_2^*,S_3^*} &= d_{t,S_3^*}^{(H)} - d_{t,S_2^*}^{(H)} - |Q_t^+| - E_{W_t,S_2^*}, &&\text{(B.17)}\\
E_{\hat{S}_1,V_1} - E_{S_1^*,V_1} &= E_{W_t,V_1}, &&\text{(B.18)}\\
E_{\hat{S}_2,V_1} - E_{S_2^*,V_1} &= d_{t,V_1}^{(H)}, &&\text{(B.19)}\\
E_{\hat{S}_2} - E_{S_2^*} &= d_{t,S_2^*}^{(H)}. &&\text{(B.20)}
\end{aligned}
\]

Because (2.2) is uniquely optimized by $S_1^*$, $S_2^*$ and $S_3^*$, we obtain

\[
|S_1^*| + \frac{2-\tau-k+|S_1^*|+k_1}{\tau-1}|S_2^*|
+ \frac{-2E_{S_1^*} - 2E_{S_2^*} + E_{S_1^*,S_3^*} - E_{S_2^*,S_3^*} + E_{S_1^*,V_1} - E_{S_2^*,V_1}}{\tau-1}
> |\hat{S}_1| + \frac{2-\tau-k+|\hat{S}_1|+k_1}{\tau-1}|\hat{S}_2|
+ \frac{-2E_{\hat{S}_1} - 2E_{\hat{S}_2} + E_{\hat{S}_1,\hat{S}_3} - E_{\hat{S}_2,\hat{S}_3} + E_{\hat{S}_1,V_1} - E_{\hat{S}_2,V_1}}{\tau-1}.
\tag{B.21}
\]

Plugging in (B.15)–(B.20) and using that $k - k_1 - |S_1^*| = |S_2^*| + |S_3^*|$ yields

\[
|W_t| + \frac{2-\tau}{\tau-1} + \frac{|S_2^*||W_t| - |S_2^*| - |S_3^*| + |W_t|}{\tau-1}
- \frac{E_{W_t} + E_{W_t,S_1^*} + E_{W_t,V_1} + E_{W_t,S_2^*} + E_{W_t,S_3^*} - d_{t,S_2^*}^{(H)} - d_{t,S_1^*}^{(H)} - d_{t,S_3^*}^{(H)} - d_{t,V_1}^{(H)}}{\tau-1} < 0.
\tag{B.22}
\]

Multiplying by τ − 1 then gives

\[
(\tau-1)|W_t| + 1 - \tau + |S_2^*||W_t| - |S_2^*| - |S_3^*| + |W_t| - 2E_{W_t} - E_{W_t,\hat{W}_t} + d_t^{(H)} < -1.
\tag{B.23}
\]

Using that $|W_t| \le |S_3^*| - t$ then yields

\[
(\tau-1)|W_t| + 1 - \tau + |S_2^*||W_t| - |S_2^*| - t - 2E_{W_t} - E_{W_t,\hat{W}_t} + d_t^{(H)} < -1,
\tag{B.24}
\]

so that (B.14) also holds.

Thus, the integral in (B.11) over $x_t$ results in a power of $x_{t-1}$. We can then use a similar technique to show that the power of $x_{t-1}$ is also smaller than −1, and iterate to finally show that the integral in (B.11) is finite, so that (5.2) is also finite.

Proof of Lemma 5.2. This lemma can be proven along similar lines as Lemma 5.1. In particular, it follows the same strategy and computations as [10, Lemma 7.3], where the factors $x_i x_j/(x_i x_j + 1)$ and $1/(x_i x_j + 1)$ are bounded by terms of $\min(x_i x_j, 1)$ and $\min(1/(x_i x_j), 1)$.
