In previous articles (here and earlier here) I gave a cursory overview of my current research into the California Current and of the technique of inverse modeling, so today I want to delve into the actual tools and code that I've developed.

To review: an inverse model seeks to solve a series of linear equations (Soetaert and Van Oevelen 2014) which state our understanding of the nutrient flows in the ecosystem under study. For example, in our study we will be looking at carbon flows between plankton, fish, and various organic carbon pools (e.g. microzoa, sardines, and detritus). This series of linear equations is then codified as a set of matrix equations.
Without delving too far into the linear algebra, which is beyond the scope of this article, these sets of equations represent, in the simplest case, the mass balance equations (1), the various field measurements (2), and a wide variety of physiological factors and observations, including growth rates and efficiencies (3):

$A_e x = b_e$ (1)

$A_a x \simeq b_a$ (2)

$G x \geq h$ (3)

Here $x$ is the vector of unknown flows: the exact equalities (1) enforce mass balance, the approximate equalities (2) encode the field measurements, and the inequalities (3) bound the solution with the physiological constraints. The subscripts match the variable names Aa, Ae, ba, be, Gg, and h that appear in the code below.
From here on out the focus will be on the code; any discussion of the theory, mathematics, or oceanography involved in the research will be limited to what is directly applicable to the code at hand. Links and notes for relevant aspects will be provided wherever possible. Let's get into it, then.
To provide a bit of an overview, which I think is important whenever details might obscure the larger-scale structure, here is a diagram of the code hierarchy.

The primary entry point to the code is a file called 'AnalysisHead.r', which lets us run the model, analyze the results, and perform other large-scale tasks through its subordinate scripts. It also handles the primary UI (command line) and help functionality, so its code is rather unpleasant to step through. I've opted to include the source code of the files in the Appendix at the end of the article, and throughout I will use code snippets and representations to illustrate the process.
To begin, we'll start with the first commands to be executed and move on from there.
require(compiler)
enableJIT(3)
compilePKGS(enable=TRUE)

args <- commandArgs(trailingOnly = TRUE) ## Arguments of the form: <id> <options>
head(args)
The first calls all have to do with optimization[1] and are interesting mainly to those esoteric few who find that sort of thing fascinating (among whom I count myself). After those comes the real start: we take the args from the command line and pass them to the head() function, which processes our input and starts the whole machine running. By using command-line arguments we save ourselves from hard-coding everything, which greatly reduces trial-and-error time.
Here is a sample command-line call that initiates (-i) a new model run for data from cycle 3 and saves the results as 001.
Rscript AnalysisHead.r -i 3 001
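Under the hood the dispatch is straightforward. Here is a minimal, self-contained sketch of the command structure; the real init() and ana() workers from AnalysisHead.r are replaced with stubs so the snippet runs on its own.

## Stubs standing in for the real worker functions in AnalysisHead.r
init <- function(cycle, id) cat("Would start run", id, "for cycle", cycle, "\n")
ana  <- function(hashes)   cat("Would analyze:", hashes, "\n")

args <- commandArgs(trailingOnly = TRUE)
if (length(args) < 1 || args[1] == '-h') {
  cat("Usage: Rscript AnalysisHead.r -i <cycle> <id> | -a <hash>\n")
} else if (args[1] == '-i' && length(args) == 3) {
  init(args[2], args[3])   ## initiate a new model run
} else if (args[1] == '-a' && length(args) == 2) {
  ana(c(args[2]))          ## analyze an existing run
} else {
  cat("Insufficient arguments\n")
}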
This script calls MCMC.r, which does all the setup for the model: reading in spreadsheets, organizing data, and ultimately doing everything up until the simulation is actually running. The majority of the code here simply picks values off of the spreadsheet and saves them in an organized vector or matrix; for example, take a look at the code below.
## Read in the Ae matrix from spreadsheet
Ae = t(data.matrix(sheetA)[,1:exact_eq])

## Pull specific values for h vector
h[[12]] = sheetB[58] # Smz-mn2     = Small Gr Min Res (100-surf)
h[[13]] = sheetB[60] # Lmz-mn2     = Lg Gr Min Res (100-surf)
h[[14]] = sheetB[62] # Gel-mn2     = Gel pred Min Res (100-surf)
h[[15]] = sheetB[57] # DeepSmz-mn2 = Sm Gr Min Res (450-100m)
h[[16]] = sheetB[59] # DeepLmz-mn2 = Lg Gr Min Res (450-100m)
h[[17]] = sheetB[61] # DeepGel-mn2 = GelPred Min Res (450-100m)
h[[18]] = sheetB[56] # Sar         = Sardine Min Res
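For context, those sheetA and sheetB objects come from the Excel workbook via XLConnect; loadWorkbook() and readWorksheet() are the calls documented in the MCMC2.r header in the Appendix. A minimal sketch, with the sheet names as placeholders:

library(XLConnect)

## Load the Excel model; the path matches MCMC.r, the sheet names are illustrative.
WB     <- loadWorkbook("./Spreadsheets/EndToEnd2Layer_TK105.xls")
sheetA <- readWorksheet(WB, sheet = "Amatrix")    # equation matrix
sheetB <- readWorksheet(WB, sheet = "Parameters") # measured values and constraints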
Once MCMC.r has read in the model's structure, the matrices $A_e$, $A_a$ and $G$ are built along with their respective vectors $b_e$, $b_a$ and $h$, which together codify all the constraints of the inverse model (equations 1-3 above). The matrices and vectors are then passed off to Xsample, the workhorse of the Monte Carlo method employed in my work.
sol = xsample(A=Aa, B=ba, E=Ae, F=be, G=Gg, H=h, sdB=sdb,
              iter=iter_count, outputlength=iter_out, jmp=jmp,
              burninlength=iter_burn, type="mirror")
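If you want a feel for xsample() without the full 105-flow model, here is a self-contained toy problem using the stock limSolve version from CRAN; the three-flow system is invented purely for illustration:

library(limSolve)

## Toy system with three unknown flows x1, x2, x3:
E <- matrix(c(1, 1, 1), nrow = 1)   # exact:       x1 + x2 + x3 = 10  (mass balance)
F <- 10
A <- matrix(c(1, 0, 0), nrow = 1)   # approximate: x1 ~= 4            (a "measurement")
B <- 4
G <- diag(3)                        # inequalities: all flows >= 0
H <- rep(0, 3)

sol <- xsample(A = A, B = B, E = E, F = F, G = G, H = H,
               iter = 5000, type = "mirror")
colMeans(sol$X)   # mean of the sampled flow vectors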
After a matter of minutes or hours, the Xsample script returns a data frame[2] of possible model solutions along with the mean and standard deviation of all the solutions. The quantity of output, typically thousands of vectors, is the main asset of using a Monte Carlo method to explore the solution space of the model.
Let's make sure we're all on the same page: here is a script that should give a pretty good idea of what a Monte Carlo method is.
MC = function(n) {
  ## A = pi*r^2
  ## First, let's plot a circle
  x = seq(0, 1, 0.01)
  plot(x=x, y=sqrt(1-x^2), xlim=c(0,1), ylim=c(0,1), type='l', ylab="y")

  ## Then generate points and plot them
  naccept = 0
  for (i in 1:n) {
    x = runif(1)  ## Random x
    y = runif(1)  ## Random y
    if (x^2 + y^2 < 1) {  ## Does the point lie inside the circle?
      naccept = naccept + 1
      points(x, y, pch=3)
    } else {
      points(x, y, col="red", pch=3)
    }
  }
  text(x=0, y=0.1, paste(naccept*4.0/n))
}
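Calling MC(100), for instance, draws the quarter circle, scatters 100 random points, and prints the estimate. Since plotting every point gets slow at the larger sample sizes discussed below, here is a vectorized variant (my own sketch, not one of the project scripts) that skips the graphics:

## Vectorized estimate: 4 times the fraction of points inside the quarter circle.
MC_fast = function(n) {
  x = runif(n)
  y = runif(n)
  4 * sum(x^2 + y^2 < 1) / n
}
MC_fast(100000)   # ~3.14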
There are numerous formulas[3] for calculating pi, but don't think that makes it easy; my personal favorite[4] looks like it requires a master's in theoretical mathematics just to make sense of. So instead I'm going to calculate it with some brute force (i.e. a Monte Carlo simulation). To do this, all you need is to check whether a randomly generated point lies inside or outside of a circle. If my high school geometry class was right[5], a point is inside the unit circle if $x^2 + y^2 < 1$.

By running the script with $n = 10$ we test 10 points, and by taking the ratio of the points inside the circle to the total number of points we get an estimate of $\pi/4$, and so of $\pi$. With only 10 points the estimate is crude, but taking more samples improves it: with 100,000 samples the script finds a value of $\pi$ that is accurate to 0.0002 percent. Not bad for a method that uses random numbers and some simple geometry.
For my work there is no simple geometric analog, since instead of working in 2-D we are working in 105-D; but it works in much the same way nevertheless. Xsample also uses some fancy algorithms to negotiate the feasible region and does its best to give us an idea of the solution space's shape and size[6].
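The core trick is the "mirror" algorithm (see the mirror() function in the xsample excerpt in the Appendix): each random-walk jump that would leave the feasible region defined by $Gx \geq h$ is reflected back across the violated constraint plane rather than rejected. A one-dimensional sketch of the idea, assuming a single constraint $x \geq 0$:

## 1-D mirror step: propose a jump from x1; if it crosses the boundary at
## x = 0, reflect it back into the feasible region instead of rejecting it.
mirror_step = function(x1, jmp = 1) {
  x2 = rnorm(1, mean = x1, sd = jmp)   # proposed jump
  if (x2 < 0) x2 = -x2                 # reflect across the mirror at x = 0
  x2
}

## A short random walk that stays in x >= 0 by construction.
x = numeric(1000)
x[1] = 1
for (i in 2:1000) x[i] = mirror_step(x[i - 1])
min(x) >= 0   # TRUE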
The majority of the Xsample script was written by Karel Van den Meersche et al.[7] and can be found in the limSolve package on CRAN[8]; the version I present here is simply optimized for my usage. Once the Xsample script returns, the MCMC.r function saves the data and generates some human-readable spreadsheets before closing.
save(sol, Aa, Ae, ba, be, Gg, h, sdb, cond,
     file=paste("./data/", cond$hash, ".RData", sep=''))
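The human-readable spreadsheets go out through XLConnect as well. A minimal sketch of the pattern, assuming the Solutions-<hash> filename used in MCMC.r and a made-up sheet name (sol$avg and sol$sd are the summary fields my modified xsample attaches to the solution):

library(XLConnect)

## Write solution means and standard deviations to a new workbook.
newwb <- loadWorkbook(paste("./data/Solutions-", cond$hash, ".xlsx", sep=''),
                      create = TRUE)
createSheet(newwb, "Summary")
writeWorksheet(newwb, data.frame(mean = sol$avg, sd = sol$sd), sheet = "Summary")
saveWorkbook(newwb)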
Appendix

AnalysisHead.r (excerpt):

## Continue Runs for elements of <l> ##
continue = function(l) {
  cond = data.frame(cycle=4, iter_count=2500000, iter_out=1000,
                    iter_burn=1000000, jmp=4, hash="00000000")
  for (i in c(1:length(l))) {
    continueRun(l[i], cond)
  }
}

## Continue Runs for elements of <l> based on original cond ##
continue2 = function(l) {
  for (i in c(1:length(l))) {
    print(paste("Running continue run 1 for", i))
    continueRun2(l[i])
  }
}

## Run Analysis for elements of <l> ##
#for (i in c(1:length(l))) {
#  analyze(l[i], c(1,1,1,1,1,1,1))
#}

## Analyze set for elements in <l>
anset = function(l) {
  analyzeSet(l, plots=c(1,1,1,1,1,1,1))
}

## Analyze run data
ana = function(l) {
  for (i in 1:length(l)) {
    analyze(l[i], c(1,1,1,1,1,1,1))
  }
}

####### The Command structure
if (length(args) < 1) {
  print("Please specify an option")
  help()
}
if (args[1] == '-h') {
  help()
}
if (args[1] == '-i') {
  if (length(args) == 3) {
    init(args[2], args[3])
  } else {
    print("Insufficient arguments")
    help()
  }
}
if (args[1] == '-a') {
  if (length(args) == 2) {
    ana(c(args[2]))
  } else {
    print("Insufficient arguments")
    help()
  }
}

## The command structure above lives inside head(); the script ends by calling it.
head(args)
MCMC2.r (excerpt):

#' MCMC2.r - 2015 Thomas Bryce Kelly
#'
#' This script is the first stage of the inverse modeling workflow. Here the
#' excel model is imported and the matrices and arrays are generated. Finally
#' xsample() is called using <cond>. The important function calls are:
#'  * XLConnect:loadWorkbook(path) : Loads the workbook.
#'  * XLConnect:readWorksheet(workbook, sheet, ...) : Reads data from the workbook.
#'  * misc:log(string, cond) : Uses the method specified in misc.r to save to the
#'      ./misc.csv file the <string> followed by a list of values within <cond>.
#'      Useful for later reference.
#'  * Twitter.r:up(title, cond, name) : Uploads the file <name>, logs the upload,
#'      and notifies via twitter that <title> was uploaded.
#'  * Twitter.r:notify(title, cond, name) : Notifies via twitter that <name> has
#'      been uploaded (must be done separately) and logs the event.
#'  * save() : Standard system save(). By default model data is saved in
#'      ./data/<cond$hash>.RData. This can be changed.
#'
#' <cond> : data.frame(cycle=NULL, iter_count=NULL, iter_out=NULL, iter_burn=NULL,
#'                     jmp=NULL, hash=NULL)

mcmc = function(cond=data.frame(cycle=1, iter_count=100000, iter_out=100,
                                iter_burn=10000, jmp=1, hash=NULL)) {
  library(XLConnect)
  library(Matrix)
  library(MASS)
  library(ggplot2)
  library(Hmisc)
  library(limSolve)
  library(digest)
  library(diagram)
  source("xsample.r")
  source("Twitter.r")
  source("misc.r")

  ## Setup environment
  WB = loadWorkbook("./Spreadsheets/EndToEnd2Layer_TK105.xls")
  log(paste("Loaded Workbook", WB@filename), cond)
  start_cell = c(5,6)
  exact_eq = 23
  approx_eq = 14
  inequal = 79
  flows = 105
  ## if (is.null(cond$hash)) ...  (a run hash is generated; digest is loaded for this)

  ## ... (worksheet parsing: the paired columns for the requested cycle are
  ##      averaged, e.g. "... + sheetB[,2*cond$cycle])/2.0", and the matrices
  ##      Aa, Ae, Gg and vectors ba, be, h, sdb are assembled)

  log(paste("Cycle", cond$cycle, "run for", cond$iter_count,
            "iterations; jmp is", cond$jmp, "and burnin period is",
            cond$iter_burn), cond)
  sol = xsample(A=Aa, B=ba, E=Ae, F=be, G=Gg, H=h, sdB=sdb,
                iter=cond$iter_count, outputlength=cond$iter_out,
                jmp=cond$jmp, burninlength=cond$iter_burn, type="mirror")
  sol$labels = lab
  log("Solution found.", cond)

  ## Save Data
  log(paste("Saving data to ./data/", cond$hash, ".RData", sep=''), cond)
  save(sol, Aa, Ae, ba, be, Gg, h, sdb, cond,
       file=paste("./data/", cond$hash, ".RData", sep=''))

  ## Human-readable spreadsheets
  newwb = loadWorkbook(paste("./data/Solutions-", cond$hash, ".xlsx", sep=''),
                       create=TRUE)
  Ra = NULL
  Ra$X   = apply(sol$X, 1, function(x) Aa %*% x)
  Ra$avg = apply(Ra$X, 1, mean)
  Ra$sd  = apply(Ra$X, 1, sd)
  writeWorksheet(newwb, sol$X, sheet="Raw")
  ## writeWorksheet(newwb, cbind(sol$avg, sol$sd), sheet=...)  (solution summary)
  ## writeWorksheet(newwb, cbind(Ra$avg, Ra$sd), sheet=...)    (approx. equations)
  ## ... (a Flows-<cond$hash> workbook mapping sol$avg onto the compartment
  ##      flow matrix is written the same way)
  saveWorkbook(newwb)
}
xsample.r (excerpt):

xsample <- function(A=NULL, B=NULL, E=NULL, F=NULL, G=NULL, H=NULL, sdB=NULL,
                    W=1, iter=3000, outputlength=iter, burninlength=NULL,
                    type="mirror", jmp=NULL, tol=sqrt(.Machine$double.eps),
                    x0=NULL, fulloutput=FALSE, test=TRUE) {

  ##### 1. definition of internal functions #####
  ## Function ensuring that a jump from q1 to q2 fulfills all inequality
  ## constraints formulated in g and h. g%*%q = h can be seen as equations for
  ## planes that are considered mirrors. When a jump crosses one or more of
  ## these mirrors, the vector describing the jump is deviated according to
  ## the rules of mirroring. The resulting new vector q will always be within
  ## the subspace of R^n for which all inequalities are met. The requirements
  ## for a MCMC are also met: the probability in the subspace is constant,
  ## the probability out of the subspace is 0.
  ## q1 has to fulfill the constraints by default!
  ## Karel Van den Meersche 20070921
  mirror <- function(q1, g, h, k=length(q1), jmp) {
    ##if (any((g%*%q1)<h)) stop("starting point of mirroring is not in feasible space")
    q2 <- rnorm(k, q1, jmp)
    if (!is.null(g)) {
      residual <- g %*% q2 - h
      q10 <- q1
      while (any(residual < 0)) {                   # mirror
        epsilon <- q2 - q10                         # vector from q1 to q2: the light-ray that is mirrored at the boundaries
        w <- which(residual < 0)                    # which mirrors are hit?
        alfa <- {{h - g %*% q10}/g %*% epsilon}[w]  # at which point does the ray hit the mirrors? g*(q1+alfa*epsilon)-h = 0
        whichminalfa <- which.min(alfa)
        j <- w[whichminalfa]                        # smallest element of alfa: which mirror is hit first?
        d <- -residual[j]/sum(g[j,]^2)              # add to q2 a vector d*g[j,] perpendicular to the plane
        q2 <- q2 + 2*d*g[j,]                        # mirrored point
        residual <- g %*% q2 - h
        q10 <- q10 + alfa[whichminalfa]*epsilon     # point of reflection
      }
    }
    q2
  }

  norm <- function(x) sqrt(x %*% x)

  #### 2. the xsample function ####
  ## conversions: vectors to matrices and checks
  if (is.data.frame(A)) A <- as.matrix(A)
  if (is.data.frame(E)) E <- as.matrix(E)
  if (is.data.frame(G)) G <- as.matrix(G)
  if (is.vector(A)) A <- t(A)
  if (is.vector(E)) E <- t(E)
  if (is.vector(G)) G <- t(G)

  if (!is.null(A)) {
    lb <- length(B)
    lx <- ncol(A)
    ## system overdetermined?
    M <- rbind(cbind(A, B), cbind(E, F))
    ## overdetermined <- !qr(M)$rank ...  (rank test)
    ## ... (particular solution via lsei; the x0 check reads:
    ##      if (l$residualNorm > 1e-6)
    ##        stop("no particular solution found; incompatible constraints")
    ##      else x0 <- l$X)
    ## ... (SVD transformation q <- t(v)q for better convergence:
    ##      a <- a%*%v; if (!is.null(G)) g <- g%*%v; Z <- Z%*%v)
    ## If overdetermined, calculate the posterior distribution of S in Ax=N(B,S)
    ## (Marko Laine 2008, thesis on adaptive MCMC). S = 1/sd^2 of the model;
    ## if underdetermined S=1, if overdetermined S is sampled from a posterior
    ## gamma distribution and the standard deviations of the data are S^-0.5.
    ## if (estimate_sdB) { q0 <- lsei(a,b)$X ... }
  }
  ## ... (the sampling loop, back-transformation, and assembly of the output
  ##      list, called xsample, go here)

  ## Summary statistics appended to the output (my addition to the CRAN version):
  xsample$avg = apply(xsample$X, 2, mean)
  xsample$sd  = apply(xsample$X, 2, sd)
  xsample$med = apply(xsample$X, 2, median)
  xsample$iqr = apply(xsample$X, 2, IQR)
Analysis script (excerpt):

# This script generates the diagnostic plots for a run (or set of runs). The
# important function calls are:
#  * load(paste("./data/", hash, ".RData", sep="")) : By default model data is
#      loaded from ./data/<cond$hash>.RData. This can be changed.
#  * png(tempname) : All plots generated will be saved as png format in ./img/tempname.
#  * misc.r:save2(name, cond) : Designed to upload and log a file. By default it
#      will use the full relative path, <name>, when saving.
#  * Twitter.r:up(title, cond, name) : Uploads the file <name>, logs the upload,
#      and notifies via twitter that <title> was uploaded.
#  * Twitter.r:notify(title, cond, name) : Notifies via twitter that <name> has
#      been uploaded (must be done separately) and logs the event.
#
# analyze()
# analyzeSet()
#
# <cond> : data.frame(cycle=NULL, iter_count=NULL, iter_out=NULL, iter_burn=NULL,
#                     jmp=NULL, hash=NULL)
# <list> : c(0,0,0,0,0,0,0)

library("digest")
library("lattice")
library("XLConnect")
library("diagram")
source("Twitter.r")
source("misc.r")

analyze = function(hash, plots) {
  load(paste("./data/", hash, ".RData", sep=""))
  flows = length(sol$avg)

  ## Each element of <plots> toggles one figure; every section draws a png in
  ## ./img/ and then calls save2() (and usually notify()):
  ##  plots[1] : "Correlation Between Flows" levelplot of cor(sol$X)
  ##             (rgb.palette(250), cuts=500), plus SDT/LDT flow correlations
  ##  plots[2] : dotchart of the surface-layer solution (flows 1:54) with
  ##             sol$med and sol$iqr/2 error segments; notify("SurSol", ...)
  ##  plots[3] : trace of sol$X[,1] ("GPP to Phy") against solution number,
  ##             plus a density plot of GPP values across solutions
  ##  plots[4] : modeled vs. measured approximate equations (full and zoomed),
  ##             with error bars built from
  ##               Ra$X   = apply(sol$X, 1, function(x) Aa %*% x)
  ##               Ra$avg = apply(Ra$X, 1, mean)
  ##               Ra$sd  = apply(Ra$X, 1, sd)
  ##             plotted against ba +/- sdb
  ##  plots[5] : gross growth efficiencies per consumer, each computed as
  ##             1 - (respiration + egestion flows)/(ingestion flows) from the
  ##             relevant columns of sol$X, with model means/sds (GGEavg,
  ##             GGEsd) drawn against literature bounds (Gmin, Gmax);
  ##             legend "Model"/"Literature"; notify("GGE", ...)
  ##  plots[6] : food-web flow matrix (below)
  ##  plots[7] : dotchart of the deep-layer solution (flows 54:59 and
  ##             61:flows) with sol$sd error segments

  ## Food Web
  if (plots[6] == 1) {
    tempname = paste("./img/Web1", hash, ".png", sep='')
    png(tempname)
    typ = "character"
    for (i in 1:30) { typ = c(typ, "numeric") }
    WB2 = loadWorkbook("./Spreadsheets/End2End2LayerCompartments.xlsx")
    WB2_data = readWorksheet(WB2, sheet="Sheet1", 2, 1, 29, 30, colTypes=typ)
    boxes = WB2_data[,1]
    flow = matrix(0.0, nrow=27, ncol=27)
    for (i in 1:(length(WB2_data[1,])-1)) {   ## 1:27
      for (j in 1:length(WB2_data[,1])) {     ## 1:27
        if (!is.na(WB2_data[j,i+1])) {
          flow[j,i] = sol$avg[WB2_data[j,i+1]]
        }
      }
    }
    rownames(flow) = boxes
    colnames(flow) = boxes
    ## ... (the flow matrix is drawn with diagram and saved)
    dev.off()
    ## Save and log the image
    save2(tempname, cond)
  }
  ## ... (remaining plot sections follow the same png/save2/notify pattern)
}

## Overlay the same figures for a set of runs <l> on shared axes
analyzeSet = function(l, plots) {
  ## plots[1..3] : modeled vs. measured scatter for each approximate equation,
  ##               with RX built from sol$avg and RS = Aa %*% sol$sd, and
  ##               ba +/- sdb cross-hairs per run
  ## plots[4]    : "Impact of constraints on Phy Flows" dotchart and a barplot
  ##               of fecal-pellet (poop_sur, poop_deep) and export terms
  ##               summed from sol$avg elements; notify("SetSurface", ...) and
  ##               notify("SetPhyFlow", ...)

  load(paste("./data/", l[1], ".RData", sep=''))
  ## ... (base plot from the first run)
  ptcol = colorRampPalette(c("red", "green", "blue"))(n=10)
  for (i in c(2:length(l))) {
    if (!is.na(l[i])) {
      load(paste("./data/", l[i], ".RData", sep=''))
      ## ... (points and error-bar segments added for each additional run)
    }
  }
  dev.off()
  ## Save and log the image
  save2(tempname, cond)
  notify("SetPhyFlow", cond, tempname)
}