Supplementary Methods

1. Generation and post-processing of segmented data from Affymetrix SNP6.0 arrays

SNP6.0 data were generated at the Broad Institute as part of The Cancer Genome Atlas (TCGA) Pilot Project, and the raw CEL files were normalized to copy number estimates using a GenePattern pipeline (detailed in [10] and Monte et al., manuscript in preparation). Normalized copy number estimates (log2 ratios) were segmented using the Circular Binary Segmentation (CBS) algorithm, followed by median centering the segment values in each sample around 0. These data were obtained from the Level 3 segmented copy number data files that are available for public download from the TCGA Data Portal website at [43].

We additionally removed markers residing within previously annotated regions of germline copy number variation [44] and merged segments with fewer than 10 markers into the closest adjacent segment, as most of these very small segments are likely to have been falsely separated.
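To make this step concrete, the following is a minimal Python sketch of the merging rule, assuming each segment is an (n_markers, log2_ratio) pair and that the ‘closest’ adjacent segment is the neighbor with the nearest copy value; the function name and data layout are illustrative, not the GISTIC implementation.

```python
def merge_small_segments(segments, min_markers=10):
    """Merge segments with fewer than min_markers into the closest adjacent segment."""
    segs = [list(s) for s in segments]  # each segment: [n_markers, log2_ratio]
    while len(segs) > 1:
        idx = next((i for i, s in enumerate(segs) if s[0] < min_markers), None)
        if idx is None:
            break
        # 'Closest' neighbor taken as the adjacent segment with the nearest copy value.
        candidates = [j for j in (idx - 1, idx + 1) if 0 <= j < len(segs)]
        j = min(candidates, key=lambda k: abs(segs[k][1] - segs[idx][1]))
        lo, hi = sorted((idx, j))
        n = segs[lo][0] + segs[hi][0]
        # Marker-weighted mean keeps the merged value consistent with the data.
        val = (segs[lo][0] * segs[lo][1] + segs[hi][0] * segs[hi][1]) / n
        segs[lo:hi + 1] = [[n, val]]
    return [tuple(s) for s in segs]
```

For example, merge_small_segments([(120, 0.0), (4, 1.2), (300, 0.1)]) folds the 4-marker segment into its right-hand neighbor, whose copy value is nearer.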

2. Deconstruction of segmented copy profiles into ‘underlying’ SCNAs

As described in the main text, we apply an algorithm termed “Ziggurat Deconstruction” (ZD) to the segmented copy data in an attempt to reconstruct the most likely sequence of underlying copy number events needed to explain each segmented copy profile. Ziggurat Deconstruction bears a certain analogy to segmentation algorithms, which attempt to maximize the likelihood of the probe-level data on each chromosome given a proposed segmentation model (plus a penalty for increasing model complexity, see Box 1). Ziggurat Deconstruction attempts to maximize the likelihood of observing the segmented copy data on each chromosome given a proposed SCNA history (plus a penalty for model complexity, see Box 2).

BOX 1: Segmentation Algorithm

Goal:

Maximize the likelihood Pr(x1, x2, …, xn | s) over proposed segmentations s (minus a penalty for model complexity).

Given:

Probe-level data x1, x2, …, xn and a proposed segmentation s.

BOX 2: Ziggurat Deconstruction

Goal:

Maximize the likelihood Pr(c | hc) over proposed SCNA histories hc (minus a penalty for model complexity).

Given:

A chromosomal segmentation profile c and a proposed SCNA history hc.

ZD performs this likelihood maximization by iterating between two complementary procedures:

Deconstruction: Converts segmentation profiles into the most likely history of underlying SCNAs, using an estimate for the background rate of SCNAs as a function of length and amplitude (e.g. Pr(e) = f(L,A) for SCNA e of length L and amplitude A).

Background Estimation: Updates the background rate of SCNA formation (e.g. Pr(e) = f(L,A) ) given the sequence of SCNAs inferred from the current deconstruction.

Details

The deconstruction procedure amounts to choosing, from among an enumerated set of possible deconstructions, the one that is most probable given the estimate of the background rate of SCNAs. For simplicity, we assume that all SCNAs are independent, so the probability of observing a given set of SCNAs is equal to the product of the individual probabilities of observing each SCNA under the background model.

Pr(e1, e2, …, em) = ∏i Pr(ei) = ∏i f(li, ai), for SCNA events ei of length li and amplitude ai.

At the start of the algorithm, the background rate of SCNAs (f(L,A)) is unspecified. We therefore begin by deriving an initial estimate of the background rates of copy number events across all samples using a highly simplified deconstruction procedure that leads to a unique set of SCNAs for each segmented chromosomal profile. This initial deconstruction procedure is based on two assumptions: 1) that each copy number breakpoint represents only a single copy number event, and 2) that copy number gains are never followed by copy number losses, and vice versa. Under these strict assumptions (which we relax later), the more extreme the amplitude of a copy number segment, the later it must have occurred during development. Thus, the evolutionary history of each chromosome can effectively be deconstructed “in reverse” by merging the most extreme amplification or deletion segment on each chromosome with its closest neighbor (recording the amplitude difference and segment length at each step) and repeating until the zero (or diploid) level has been reached (as shown in Supplemental Figure 1a).
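The sketch below illustrates this initial ‘reverse’ deconstruction under the two assumptions above. Segments are (n_markers, copy_level) pairs and ties are broken arbitrarily; these representational choices are ours, not the production code’s.

```python
def initial_deconstruction(segments):
    """Peel SCNAs off 'in reverse', most extreme first; returns (length, amplitude) events."""
    segs = [list(s) for s in segments]  # each segment: [n_markers, copy_level]
    events = []
    while len(segs) > 1 or abs(segs[0][1]) > 1e-9:
        # The most extreme segment (farthest from zero) must be the most recent event.
        i = max(range(len(segs)), key=lambda k: abs(segs[k][1]))
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(segs)]
        if neighbors:
            j = min(neighbors, key=lambda k: abs(segs[k][1] - segs[i][1]))
            target = segs[j][1]
        else:
            j, target = None, 0.0  # last remaining segment peels down to the zero level
        amp = segs[i][1] - target
        if abs(amp) > 1e-9:
            events.append((segs[i][0], amp))  # record the implied SCNA
        segs[i][1] = target
        if j is not None:  # merge the now-identical adjacent segments
            lo, hi = sorted((i, j))
            segs[lo:hi + 1] = [[segs[lo][0] + segs[hi][0], target]]
    return events
```

Applied to a profile such as [(50, 0.3), (20, 1.1), (30, 0.3)], this first peels a (20, 0.8) event off the most extreme segment and then a (100, 0.3) event spanning the merged chromosome.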

At the end of each round of deconstruction, we bin the copy number events from all samples according to their length (expressed as a fraction of a chromosome arm, see below) and amplitude (expressed as change in copy number), and use the frequency of segments residing in each bin as an estimate of the background probability (f(L,A)) of such events in subsequent iterations. To avoid over-fitting the background probability distribution to the initial deconstruction, we smooth the distribution by adding uniformly distributed ‘pseudocounts’ to each bin (equal to 1% of the total number of SCNAs across the dataset).
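A compact version of this binning-plus-pseudocount step, assuming events are the (length, amplitude) pairs produced by the deconstruction and that the bin edges are supplied by the caller:

```python
import numpy as np

def estimate_background(events, length_edges, amp_edges, pseudo_frac=0.01):
    """Smoothed frequency table f(L, A) over (length, amplitude) bins."""
    lengths = [length for length, amp in events]
    amps = [abs(amp) for length, amp in events]
    counts, _, _ = np.histogram2d(lengths, amps, bins=[length_edges, amp_edges])
    # Pseudocounts equal to 1% of all SCNAs, spread uniformly over the bins.
    counts += pseudo_frac * len(events) / counts.size
    return counts / counts.sum()
```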

For subsequent iterations, we expand the space of evolutionary histories that are considered by allowing for the existence of ‘basal’ copy levels about which the general Ziggurat Deconstruction procedure can be applied (see Supplemental Figure 1b). This expanded framework allows for deletion events to occur on top of amplified ‘basal’ copy levels, and vice versa, generating more realistic deconstructions than the initial iteration. In principle, one could reconstruct all possible histories for a chromosome by fitting up to n+1 ‘basal’ copy levels, where n is the number of segment breakpoints on that chromosome. An evolutionary model with k ‘basal’ copy levels can be specified by 2k − 1 free parameters, representing the k copy levels and the k − 1 breakpoints between these levels. Because one can always obtain a better fit to the underlying data by increasing the number of free parameters, one must introduce a regularization term to compare models of differing complexity.

We introduce this regularization penalty using the Bayesian Information Criterion [45]. Specifically, for a proposed SCNA history hc with k basal levels fit to a chromosome with m segments, we calculate the BIC:

BIC(hc) = (2k − 1) ln(m) − 2 ln Pr(c | hc).

The proposed SCNA history with the minimal BIC is the model with the greatest explanatory power and the one that is chosen for each round of deconstruction.

In practice, we cap the number of ‘basal’ copy levels per chromosome at 2, both to limit the computational cost of the procedure and because the vast majority of cancer chromosomes appear to be well fit by a maximum of 2 copy levels. Intuitively, these ‘basal’ levels can be thought of as representing the copy levels of the two chromosome arms; importantly, however, we do not constrain them to correspond to the arms, as the breakpoint between them is allowed to occur anywhere along the chromosome.

Because the underlying data are segmented, we simplify the search for the parameter values that maximize the likelihood function Pr(c | hc) by searching only over values and breakpoints that are present in the data; hence we need only consider a finite number of models for each chromosome, representing all possible combinations of discrete parameter values. We therefore find the optimal model by enumeration over these finite possibilities. We first loop over the n possible model breakpoints (including the possibility that the chromosome has only a single level), and for each potential breakpoint find the ‘basal’ level or levels that maximize the total likelihood of all segments given that breakpoint. We then choose the breakpoint whose best model has the minimal BIC.
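The following sketch enumerates these finite possibilities for up to two ‘basal’ levels, using the BIC form given above (a (2k − 1) ln(m) penalty over m segments). Here loglik is a caller-supplied stand-in for log Pr(c | hc) under the current background model, since its exact form depends on the fitted background rates.

```python
import math

def best_model(segments, loglik):
    """Enumerate one- and two-level models; return (breakpoint, levels, bic) with minimal BIC."""
    m = len(segments)
    values = sorted({level for _, level in segments})  # candidate basal levels from the data
    best = None
    for v in values:  # k = 1: single basal level, 2k - 1 = 1 free parameter
        bic = 1 * math.log(m) - 2 * loglik(segments, (v,), None)
        if best is None or bic < best[2]:
            best = (None, (v,), bic)
    for bp in range(1, m):  # k = 2: one breakpoint, 2k - 1 = 3 free parameters
        for v1 in values:
            for v2 in values:
                bic = 3 * math.log(m) - 2 * loglik(segments, (v1, v2), bp)
                if bic < best[2]:
                    best = (bp, (v1, v2), bic)
    return best
```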

In theory, one could continue to iterate between deconstruction and background estimation until the optimal parameter values do not change from one round to the next; although we have not formally proven that this procedure will converge on all datasets, we find that it does converge in practice. By default, we only perform two rounds of likelihood optimization, as we have found that the background distribution and deconstruction parameters tend to converge rapidly on most datasets; however, our code allows users to increase the number of iterations if desired.

To ensure that our procedure is not highly dependent on the initial deconstruction procedure, we tested the ZD procedure using 100 random initializations of the background rate and compared the resulting deconstructions. In every case, the output eventually converged to the same background distribution (data not shown), although in the majority of these cases several iterations were required. This suggests that our optimization procedure is not highly dependent on the initial deconstruction procedure used, and that the final deconstruction obtained by this procedure is likely to represent a robust optimum.

After the final deconstruction, we plot the distribution of segment lengths and amplitude differences across the entire dataset (see Supplementary Figure 2). To allow for the comparison of events occurring on chromosomes of different lengths, we normalize the length of each SCNA by calculating the fraction of each chromosome arm covered by the SCNA; for SCNAs that cross the centromere, the length is expressed as the sum of the fractions of each chromosome arm covered by the SCNA. As noted in the main text, the sharp increase in the frequency of segments occupying exactly one chromosome arm allows for a natural length-based definition of ‘focal’ and ‘arm-level’ SCNAs. Although the user can choose a different threshold, by default SCNAs covering greater than 98% of a chromosome arm are considered ‘arm-level’ while those covering less than 98% are considered ‘focal’.
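The normalization and classification rules translate directly into code; the coordinate system and arm boundaries below are illustrative placeholders.

```python
def arm_fraction(start, end, arms):
    """SCNA length as the summed fraction of each chromosome arm it covers.

    arms: [(p_start, p_end), (q_start, q_end)] in marker or base coordinates.
    """
    frac = 0.0
    for a_start, a_end in arms:
        overlap = max(0, min(end, a_end) - max(start, a_start))
        frac += overlap / (a_end - a_start)
    return frac

def classify(start, end, arms, threshold=0.98):
    """Default split: > 98% of a chromosome arm is 'arm-level', otherwise 'focal'."""
    return "arm-level" if arm_fraction(start, end, arms) > threshold else "focal"
```

For instance, classify(0, 99, [(0, 100), (100, 250)]) returns "arm-level" (99% of the p arm), while an event crossing the centromere accumulates the fractions of both arms.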

3. Probabilistic framework for scoring copy number events

One of the novel aspects of the original GISTIC 1.0 algorithm [15] was that it weighted both the frequency and mean amplitude of copy number alteration at each locus when identifying significantly altered regions, rather than just the frequency of alteration. The original GISTIC G-score, defined as

Gi = (1/N) Σj aij · I(aij > θ)

for marker i, sample log2 copy ratios aij across N samples, copy-ratio threshold θ, and indicator function I(·), is equivalent to multiplying the frequency of alteration by the mean amplitude in altered samples. While this score captured the intuitive notion that higher amplitude changes are more likely to represent driver alterations than lower amplitude changes, it did not explicitly represent these likelihoods. Moreover, we wanted to extend the score to incorporate additional features of copy number events that may affect the background rates at which they occur, such as their length or chromosomal location.
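For reference, the original G-score amounts to a one-line computation per marker; the variable names below mirror the formula above.

```python
import numpy as np

def g_score(a_i, theta):
    """G_i = (1/N) * sum_j a_ij * I(a_ij > theta); i.e., frequency x mean altered amplitude."""
    a_i = np.asarray(a_i, dtype=float)
    return a_i[a_i > theta].sum() / a_i.size
```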

We therefore set out to define a general framework for scoring the observed copy number changes in each region according to the negative log of the probability of observing such changes under a specific background mutation rate of SCNAs. As described in the main text, this is a principled approach to defining a score in place of the arbitrary scores used in GISTIC 1.0 and other copy number methods, and allows for the modeling of variation in the background rate according to specific features of each SCNA.

a. Scoring for individual markers

The biological implication of the arm-level events is unclear, and they may very well target multiple genes or pathways [46]; therefore we are typically most interested in searching for regions that are significantly altered by focal events. However, the likelihood of observing a focal event may depend on whether a superimposed arm-level event is present. We therefore generate a score reflecting the probability of observing the set of focal SCNAs given the existence of the observed set of arm-level SCNAs. Let Bi = {b1, b2, …} represent the set of arm-level SCNAs bi covering marker i, and Fi = {f1, f2, …} represent the set of focal SCNAs fi covering marker i. We define the focal GISTIC score at marker i, FGi, as follows:

FGi = −ln Pr(Fi | Bi) = −Σf∈Fi ln Pr(f | Bi).

We assume in this formulation that the focal SCNAs are independent. One may define a similar score for arm-level SCNAs, although this raises additional issues not covered in the present manuscript. Interested readers are referred to [4].
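Under the independence assumption, the focal score at a marker is just a sum of negative log-probabilities. In this sketch, prob_given_arm is a stand-in for Pr(f | Bi), whose functional form is derived below.

```python
import math

def focal_gistic_score(focal_scnas, arm_scnas, prob_given_arm):
    """FG_i = -sum over focal SCNAs f covering marker i of ln Pr(f | B_i)."""
    return -sum(math.log(prob_given_arm(f, arm_scnas)) for f in focal_scnas)
```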

Thus, all that remains is to approximate the function Pr(f | Bi) for a focal SCNA f of length L and amplitude A given a model for the background rate of SCNAs. One approach to this background mutation modeling is to approximate Pr(f | Bi) by the frequency of occurrence of focal SCNAs of similar length, amplitude, and arm-level context across a large dataset (similar to how the background rate is estimated during the ZD procedure, see above). However, as we described in the main text, this approach carries the potential to underweight driver events, which are generally of greater amplitude and shorter length than the typical SCNA (Supplementary Figure 3) and hence constitute the majority of events in their length/amplitude neighborhood. Thus, we set out to fit Pr(f | Bi) to a functional form that would be relatively insensitive to the presence of driver events in the underlying dataset. To this end, we utilized a comprehensive dataset consisting of 3,131 cancer samples representing over 26 distinct histologic subtypes [4].

We started by noting that, across a wide range of cancers, the frequency of focal copy number events decreases inversely with length for lengths up to a single chromosome arm (as in Figure 2b); this correlation is independent of amplitude for all but the highest amplitude copy segments (which are likely enriched for driver events). Since the likelihood that a marker is covered by a focal copy number alteration of any length less than a chromosome arm is roughly constant for all such lengths, the length of an SCNA contributes a constant factor to the log-likelihood of observing the data at a given marker under the background model. Moreover, except for the smallest length and highest amplitude bins, which are likely confounded by a relative abundance of driver events as well as platform-specific dynamic range, the frequency of a marker being covered by focal SCNAs reaching a given total amplitude in a sample decreases exponentially with amplitude [4]. In other words, we find that a reasonable null model for the probability that a marker i is covered by focal SCNAs of total amplitude A is given by:

where is a positive scaling parameter that is fit across all samples (and separately for amplifications and deletions).

The exponential relationship between observed frequency and event amplitude implies that the probabilistic score, defined as the negative log of the probability of observing the data, is linearly proportional to the amplitude of the copy number change. As mentioned in the main text, one consequence of this change is an increased sensitivity to high copy-number changes that may be highly sensitive to differences in platform dynamic range or variations in probe saturation kinetics, as opposed to biologically important differences. To minimize the impact of these effects, we cap the copy number values prior to scoring the genome. Although the cap is technically a free parameter, in practice it can be reliably estimated from the data as the maximal copy value up to which the plot of log-frequency vs. copy number change is well fit by a straight line. In our experience applying GISTIC 2.0 to sample sets run on multiple platforms, we have found that utilizing a cap greatly reduces the variability in inter-platform results.
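One plausible reading of this cap-estimation rule is sketched below: grow the amplitude window until a straight-line fit to log-frequency vs. amplitude breaks down. The binning, tolerance, and stopping rule are our assumptions; the exact GISTIC procedure may differ.

```python
import numpy as np

def estimate_cap(amplitudes, n_bins=50, tol=0.5):
    """Largest amplitude up to which log-frequency vs. amplitude stays ~linear."""
    counts, edges = np.histogram(amplitudes, bins=n_bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    logf, mids = np.log(counts[keep]), mids[keep]
    cap = mids[min(2, len(mids) - 1)]
    for end in range(3, len(mids) + 1):  # grow the fit window until linearity breaks
        slope, intercept = np.polyfit(mids[:end], logf[:end], 1)
        resid = logf[:end] - (slope * mids[:end] + intercept)
        if np.abs(resid).max() > tol:
            break
        cap = mids[end - 1]
    return cap

def cap_copy_values(a, cap):
    """Cap copy values prior to scoring to damp platform dynamic-range effects."""
    return np.minimum(np.asarray(a, dtype=float), cap)
```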

Finally, we used our large cross-cancer dataset to model the dependence of focal copy number changes on the underlying arm-level changes by looking at the distribution of total focal copy number amplitudes as a function of total arm-level amplitude. We observed that focal amplifications were largely independent of their underlying arm-level changes; by contrast, focal deletions showed a strong dependence on arm-level amplitude, such that the probability of observing a focal deletion of amplitude A on top of an arm-level change of amplitude −B was

Pr(A | −B) ∝ exp(−αA / (1 − B + ε)),

where we introduce ε (typically 0.05) for numerical stability in the exceedingly rare case that the arm-level copy change exceeds 1.

Thus, we derive a relatively simple model for the probability of observing focal copy number events at a given marker that depends only on the total amplitude of the copy change and the underlying arm-level change (for deletions):

Pr(A) ∝ exp(−αamp A) for amplifications, and

Pr(A | −B) ∝ exp(−αdel A / (1 − B + ε)) for deletions.
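Collecting the pieces, the null model reduces to a short function. The parameterization follows the reconstructed formulas above, up to an additive normalization constant.

```python
def neg_log_prob(A, B=0.0, alpha=1.0, eps=0.05, deletion=False):
    """-ln Pr of a marker carrying focal SCNAs of total amplitude A (up to a constant).

    Amplifications: independent of the arm level, -ln Pr grows as alpha * A.
    Deletions: on an arm of amplitude -B, the rate steepens to alpha / (1 - B + eps).
    """
    if deletion:
        return alpha * A / (1.0 - B + eps)
    return alpha * A
```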

Using the probabilistic framework described above, we calculate separate focal GISTIC scores as follows (a code sketch of these four steps appears after the list):

1) Construct focal amplification and deletion genomes by summing the focal amplification and deletion SCNAs in each sample

2) Similarly construct arm-level amplification and deletion genomes by summing the arm-level amplification and deletion SCNAs in each sample

3) Score the observed amplification and deletion profiles in each sample according to –ln(Pr(f | B)) as described above

4) Sum across independent samples to generate the final focal amplification and deletion scores
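A minimal end-to-end sketch of these four steps, assuming SCNAs are supplied as (start, end, amplitude) marker intervals per sample and inlining the exponential null model from above; the names and data layout are ours.

```python
import numpy as np

def focal_scores(samples, n_markers, alpha=1.0, eps=0.05, deletion=False):
    """Steps 1-4 for one event type (amplification or deletion) across samples."""
    total = np.zeros(n_markers)
    for focal_events, arm_events in samples:
        focal = np.zeros(n_markers)  # steps 1-2: sum SCNAs into per-sample genomes
        arm = np.zeros(n_markers)
        for start, end, amp in focal_events:
            focal[start:end] += amp
        for start, end, amp in arm_events:
            arm[start:end] += amp
        # Step 3: score each marker by -ln Pr(f | B) under the exponential null.
        if deletion:
            score = alpha * focal / (1.0 - arm + eps)
        else:
            score = alpha * focal
        total += score  # step 4: sum across independent samples
    return total
```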

As with the original version of GISTIC, we calculate the distribution of focal GISTIC amplification and deletion scores expected by chance by random permutation of the marker locations throughout the genome, a procedure that also controls for variations in the rate of SCNA across different samples. One-sided p-values are calculated for each marker as the cumulative fraction of permuted score values that exceed that marker’s amplification or deletion score. Multiple-hypothesis correction is performed using the Benjamini-Hochberg (BH) FDR method [36].

Finally, because the focal GISTIC score is represented as a sum of independent random variables, we are able to utilize a semi-exact approximation method, as described in the original version of GISTIC [15], to derive an accurate approximation to the asymptotic distribution that would be achieved by all possible permutations of marker positions. In this approach, the background distribution of focal GISTIC scores is derived by convolution of the distributions of focal GISTIC scores in each sample. As no actual permutations are performed, this allows us to calculate accurate p-values to levels of precision that could not be computationally achieved by direct permutation.
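The convolution idea can be sketched as follows: discretize each sample’s per-marker score distribution on a common grid and convolve across samples, reading p-values from the tail of the result. The grid size and bin width are arbitrary here, and a production implementation would likely use FFT-based convolution; this is an illustration, not GISTIC’s exact code.

```python
import numpy as np

def convolved_null(per_sample_scores, bin_width=0.01, n_bins=4096):
    """Null distribution of the summed marker score, by convolving per-sample distributions."""
    null = np.zeros(n_bins)
    null[0] = 1.0  # point mass at zero before any sample is added
    for scores in per_sample_scores:
        hist = np.zeros(n_bins)
        idx = np.minimum((np.asarray(scores) / bin_width).astype(int), n_bins - 1)
        np.add.at(hist, idx, 1.0 / len(scores))  # this sample's score distribution
        null = np.convolve(null, hist)[:n_bins]  # truncate; choose the grid wide enough
    return null

def p_value(observed, null, bin_width=0.01):
    """Right-tail probability of a summed score >= observed."""
    k = min(int(observed / bin_width), len(null) - 1)
    return float(null[k:].sum())
```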

b. Scoring for genes

As described in the main text, we also defined a modified scoring and permutation procedure (Gene GISTIC) that scores genes rather than markers. This procedure is designed to account for the likelihood of observing all deletion events affecting a single gene unit, even if those deletions are non-overlapping. The scoring procedure begins in the same way as the marker-based scoring defined above. However, for each sample, we first collapse the marker-based probabilities under the null into a gene-based probability as follows: