**Model Equations for Water Waves and Applications**

JERRY L. BONA, University of Illinois at Chicago

We will discuss systems of partial differential equations that model water waves. After a brief historical introduction, various classes of evolution equations are put forward. These are then used to help understand such diverse geophysical phenomena as tsunami propagation, rogue waves, and sand bar formation.

**Approximation of Functions on Unknown Manifolds defined by High-Dimensional Unstructured Data**

CHARLES CHUI, University of Missouri-St. Louis and Stanford University

With the recent rapid technological advancement and significantly lower
manufacturing cost in such areas as image sensor and capture, satellite
and medical imaging, memory devices, computing power, convenient
internet access, low-cost wireless communication, and powerful search
engines, the amount of data and information to be
processed and understood is overwhelming. One of the most popular
current approaches to this problem is to represent each piece of
information as a point in a high-dimensional Euclidean space $\RR^{s}$
and consider the collection of such points as a point-cloud ${\mathcal
P}$ that lies on some unknown manifold $\XX\subset \RR^{s}$. For
example, in applications to photo-library organization and image search
engines, each point in the point-cloud in $\RR^{s}$ represents a digital
image thumbnail, with the dimension $s$ being the maximum resolution of
the image collection. In general, when some pieces of information are
only partially available or corrupted, or when the point-cloud is too
large to handle, a subset ${\mathcal C}\subset {\mathcal P}$ of reliable
data, called a training set, is used to process ${\mathcal P}$.

Although the manifold $ \XX $ is unknown, whatever information is available
from the point-cloud ${\mathcal P}$ can be used to determine $\XX$
through some symmetric positive semi-definite kernel $K$ defined by the
dataset. However, it is usually not economical or even not feasible to
compute the spectral decomposition of $K$ for a large point-cloud. To
overcome this obstacle, we developed a class of randomized algorithms
for computing the "anisotropic transformation" of the dataset to
re-organize the data without the need to compute the eigenvalues
directly. The transformed dataset then provides a hierarchical structure
for manifold dimensionality reduction, while preserving data topology
and geometry. On the other hand, to apply this manifold approach to such
application areas as pattern recognition, time series event prediction,
and recovery of corrupted or missing data values, certain appropriate
functions of choice defined only on some desired training sets
${\mathcal C}$ must be extended to the entire unknown manifold $\XX$. We
will discuss how data geometry can be incorporated with spatial
approximation to solve this extension problem. In particular, we present
a point-cloud interpolation formula that provides near-optimal degree
of approximation to the (unknown) target functions.

**Random Matrix Theory and Covariance Matrix Estimation**

WEI B. WU, the University of Chicago

I will give an introduction to modern random matrix theory, in particular the asymptotic theory for eigenvalues of sample covariance matrices. Then I will discuss the high-dimensional covariance matrix estimation problem. Using the framework of nonlinear processes described in Wiener (1958), I will talk about the convergence of regularized covariance matrix estimates. These results can be applied to the traditional Wiener-Kolmogorov prediction theory.
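
The limiting spectral behavior of sample covariance matrices is easy to probe numerically. The following is a minimal sketch (our own illustration, not material from the talk), assuming i.i.d. standard normal data: the eigenvalues of $S = XX^T/n$ should concentrate on the Marchenko-Pastur support $[(1-\sqrt{\gamma})^2, (1+\sqrt{\gamma})^2]$, $\gamma = p/n$.

```python
import numpy as np

# Marchenko-Pastur check: for a p x n data matrix X with iid standard normal
# entries, the spectrum of S = X X^T / n fills [(1-sqrt(g))^2, (1+sqrt(g))^2].
rng = np.random.default_rng(1)
p, n = 200, 800                          # dimension p, sample size n, g = 1/4
X = rng.standard_normal((p, n))
S = X @ X.T / n                          # sample covariance matrix
eigs = np.linalg.eigvalsh(S)             # ascending eigenvalues

g = p / n
edges = ((1 - np.sqrt(g)) ** 2, (1 + np.sqrt(g)) ** 2)   # here (0.25, 2.25)
```

With $p/n$ held fixed, the extreme eigenvalues deviate from these edges only at the $O(n^{-2/3})$ scale.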

**Discrete Mathematics**

**Temporal Scale of Processes in Dynamic Networks**

RAJMONDA SULO CACERES, University of Illinois at Chicago

Temporal streams of interactions are commonly aggregated into dynamic networks for temporal analysis. Results of this analysis are greatly affected by the resolution at which the original data are aggregated. The mismatch between the inherent temporal scale of the underlying process and that at which the analysis is performed can obscure important insights and lead to wrong conclusions. To this day, there is no established framework for choosing the appropriate scale for temporal analysis of streams of interactions. Our paper offers the first step towards the formalization of this problem. We show that for a general class of interaction streams it is possible to identify, in a principled way, the inherent temporal scale of the underlying dynamic processes. Moreover, we state important properties of these processes that can be used to develop an algorithm to identify this scale. Additionally, these properties can be used to separate interaction streams for which no level of aggregation is meaningful from those that have a natural level of aggregation.

**Counting Independent Sets in Triangle-Free Graphs**

JEFFREY COOPER, University of Illinois at Chicago

We give a lower bound on the number of independent sets in a triangle-free graph with $n$ vertices and average degree $d$. This extends a result of Ajtai, Koml\'os, and Szemer\'edi stating that the independence number of a triangle-free graph with $n$ vertices and average degree $d$ is at least $c(n/d)\log d$. This is joint work with Dhruv Mubayi.

**Dynamic Markov Bases**

ELIZABETH GROSS, University of Illinois at Chicago

In this talk, we demonstrate a package for Macaulay2 that generates Markov moves on the fly for decomposable discrete undirected graphical models. Hypothesis testing in statistics can become problematic for large contingency tables. In order to approximate test statistics, one can use the Metropolis-Hastings algorithm to perform a random walk on all contingency tables with the same sufficient statistics. A Markov basis is a set of moves that ensures such a random walk connects every pair of tables. In practice, a Markov basis should be computed and stored before running an MCMC algorithm such as Metropolis-Hastings; however, since a Markov basis can be quite large, it is desirable to be able to generate a Markov move as needed. Such a dynamic algorithm can be implemented for statistical models whose Markov bases have known closed forms, such as the decomposable discrete undirected graphical models.
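
As a toy illustration of such a walk (a minimal hand-rolled sketch, not the Macaulay2 package described above), consider $2\times 2$ contingency tables, for which the single move $\left(\begin{smallmatrix}+1 & -1\\ -1 & +1\end{smallmatrix}\right)$ together with its negative is a Markov basis:

```python
import random

# Random walk on 2x2 contingency tables with fixed row and column sums,
# using the single basic Markov move +[[1,-1],[-1,1]] and its negative.
MOVE = [[1, -1], [-1, 1]]

def apply_move(table, sign):
    """Return table + sign*MOVE, or None if any entry would go negative."""
    new = [[table[i][j] + sign * MOVE[i][j] for j in range(2)] for i in range(2)]
    if any(new[i][j] < 0 for i in range(2) for j in range(2)):
        return None
    return new

def walk(table, steps, seed=0):
    """Symmetric random walk over all tables sharing the starting margins."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = apply_move(table, rng.choice([+1, -1]))
        if candidate is not None:   # stay put if the move leaves the fiber
            table = candidate
    return table

start = [[3, 1], [2, 4]]
end = walk(start, 1000)
row_sums = [sum(r) for r in end]
col_sums = [end[0][j] + end[1][j] for j in range(2)]
```

Every table visited has the same margins (the sufficient statistics) as the starting table, and the move connects the whole fiber; adding a Metropolis-Hastings acceptance step would turn this into the sampler discussed above.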

**Harmonious Coloring of Trees with Large Maximum Degree**

JAEHOON KIM, University of Illinois at Urbana-Champaign

A harmonious coloring of $G$ is a proper vertex coloring of $G$ such that every pair of colors appears on at most one pair of adjacent vertices. The harmonious chromatic number of $G$, $h(G)$, is the minimum number of colors needed for a harmonious coloring of $G$. We show that if $T$ is a forest of order $n$ with maximum degree $\Delta(T)\geq \frac{n+2}{3}$, then $h(T)=\Delta(T)+2$ if $T$ has non-adjacent vertices of degree $\Delta(T)$, and $h(T)=\Delta(T)+1$ otherwise. Moreover, the proof yields a polynomial-time algorithm for an optimal harmonious coloring of such a forest. This is joint work with Saieed Akbari and Alexandr Kostochka.

**Choosability with Separation in Graphs and Hypergraphs**

MOHIT KUMBAT, University of Illinois at Urbana-Champaign

For a hypergraph $G$ and a positive integer $s$, let $\chi_{\ell}(G,s)$ be the minimum value of $l$ such that $G$ is $L$-colorable from every list $L$ with $|L(v)|=l$ for each $v\in V(G)$ and $|L(u)\cap L(v)|\leq s$ for all $u, v\in e\in E(G)$. This parameter was studied by Kratochv\'{i}l, Tuza and Voigt for various kinds of graphs. In this talk, we present the asymptotics of $\chi_{\ell}(G,s)$ for complete graphs, balanced complete multipartite graphs and complete $k$-partite $k$-uniform hypergraphs. This is joint work with Z. F\"uredi and A. Kostochka.

**List Rankings of Paths, Cycles, and Trees**

DANIEL MCDONALD, University of Illinois at Urbana-Champaign

A ranking on a graph is a labeling of its vertices with positive integers such that any path whose endpoints have the same label contains a larger label. The list rank number of a graph G is the least positive integer k such that if each vertex of G is assigned a set of k potential labels, G can always be ranked by labeling each vertex with a label from its assigned list. We compute the list rank number of paths, cycles, and trees with many more leaves than internal vertices.
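
The definitions above are easy to experiment with for ordinary (non-list) rankings. The sketch below is our own brute force, with hypothetical helper names; it recovers the known fact that the rank number of the $n$-vertex path is $\lfloor \log_2 n\rfloor + 1$:

```python
from itertools import product

def is_ranking_of_path(labels):
    """On a path v_1..v_n: between any two equal labels there must be a larger one."""
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j] and max(labels[i:j + 1]) <= labels[i]:
                return False
    return True

def rank_number_of_path(n):
    """Least k such that the n-vertex path has a ranking using labels 1..k."""
    k = 1
    while True:
        if any(is_ranking_of_path(lab) for lab in product(range(1, k + 1), repeat=n)):
            return k
        k += 1

# Matches floor(log2(n)) + 1 for n = 1..8.
values = [rank_number_of_path(n) for n in range(1, 9)]
```

The list version of the problem studied in the talk replaces the common label set 1..k by an arbitrary list of k labels at each vertex, which this brute force does not cover.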

**Computing the Variety of K-algebra Homomorphisms**

JON YAGGIE, University of Illinois at Chicago

Let $k$ be an algebraically closed field. Let $A$ and $B$ be arbitrary commutative (unitary) $k$-algebras. Assume $V\subset A$ and $W\subset B$ are finite dimensional $k$-linear subspaces. Denote the subalgebras of $A$ and $B$ generated by $V$ and $W$ as $A(V)$ and $B(W)$. Then the set $Hom(A,B,V,W)$ of $k$-algebra homomorphisms $f:A(V)\to B(W)$ such that $f(V)\subset W$ is an affine $k$-variety in a natural way. The structure of the proof of this claim suggests an algorithm could be developed to allow software to calculate the affine variety $Hom(A,B,V,W)$. The goal of my research is to develop software capable of doing this calculation and to use it to compute these varieties for some classical algebras (e.g., group algebras, monomial algebras, etc.). Time permitting, I will discuss specific applications of these algebraic sets.

**Implicitly Coupled Multiple Time-Scale Electrical Power Grid Dynamics Simulation**

SHRIRANG ABHYANKAR, Illinois Institute of Technology

The existing simulation tools for studying the different electrical power
system dynamics are specifically tailored to a particular range of time scale
and are divided into two groups: Transient Stability Simulators (TS) and
Electromagnetic Transients Simulators (EMT). A transient stability
simulator (TS), running at large time steps, is used for studying relatively slower
dynamics, e.g., electromechanical interactions among generators, and can be
used for simulating large-scale power systems. In contrast, an
electromagnetic transients simulator (EMT) models the same components in
finer detail and uses a smaller time step for studying fast dynamics, e.g.,
electromagnetic interactions among power electronics devices. Simulating
large-scale power systems with an electromagnetic transient simulator is
computationally inefficient due to the small time step size involved.

This talk presents a novel implicitly coupled solution approach for doing a
combined transient stability and electromagnetic transient simulation. To
combine the two sets of equations with their different time steps, and
ensure that the TS and EMT solutions are consistent, the equations for TS
and coupled-in-time EMT equations are solved simultaneously. While
computing a single time step of the TS equations, a simultaneous
calculation of several time steps of the EMT equations is proposed.

Furthermore, a parallel implementation of the developed implicitly coupled
multiple time-scale dynamics simulator, using PETSc, is discussed. Results
of experimentation with different reordering strategies, linear solution
schemes, and preconditioners are presented.

**Quantitative Techniques in Hedge Fund Portfolios – Insights from Industry**

RANJIAN BHADURI, AlphaMetrix LLC

The mathematics of liquidity will be examined, including applied liquidity solutions. Risk-measurement tools such as the Omega function, and how to apply them, will be discussed, along with nuggets of wisdom from conducting hedge fund due diligence and from portfolio and risk management.

**Very Large Scale Data Analysis Challenge: A Numerical Linear Algebra Point of View**

JIE CHEN, Argonne National Laboratory

The desire for higher resolution and accuracy has driven modern science to produce huge volumes of data. In application areas such as nuclear engineering and climate modeling, it is not rare to see data at the tera-, peta-, or even larger scale. Statistical analysis is a methodology for approaching and understanding these data, which in turn advances the respective scientific fields. One concrete example is the use of a Gaussian process, a well-studied and widely used statistical tool, for modeling data in the spatial/temporal domain. In this talk, we consider the linear algebra that lies at the heart of many Gaussian process tasks, such as sampling, fitting, interpolation and regression. State-of-the-art methods for solving such problems cannot go beyond a scale of perhaps a few tens of thousands of data points. I will explain the difficulties and challenges from the perspective of numerical linear algebra, and show successful algorithms we recently developed that can handle data larger than this scale by at least two orders of magnitude. There is still a long road towards the goal of peta- or even larger scale data, but we are on track.

**Applying Iterative Methods to Least-squares Computations of Null Vectors, Eigenvectors, Singular Vectors**

SOU-CHENG CHOI, the University of Chicago and Argonne National Laboratory

For a singular matrix of arbitrary shape, we observe that null vectors
can be obtained by solving least-squares problems involving the
transpose of the matrix. For sparse rectangular matrices, this suggests a
new application of the iterative solvers LSQR and LSMR. In the square
case, MINRES, MINRES-QLP, GMRES, LSQR, or LSMR are applicable.

When a given matrix has a null space (or kernel) of dimension more than
one and we are interested in obtaining multiple null vectors, we apply
our matrix-transpose approach to a sequence of least-squares problems
with carefully-constructed multiple right-hand sides.

MINRES-QLP, LSQR, LSMR are designed with stopping conditions for
handling least-squares problems. New stopping rules are needed for
MINRES and GMRES on singular systems. We present mathematical results
for detecting convergence of null vectors, eigenvectors, and singular
vectors in these Krylov-subspace methods. We give results for computing
the stationary probability vectors for Markov chain models, including
Google's PageRank applied to computer-science literature, and multiple
null vectors for sparse systems arising in helioseismology.
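
The transpose idea in the first paragraph can be sketched in a few lines (a toy dense example, with SciPy's LSQR standing in for the solvers above): the least-squares residual r = b - A^T y is orthogonal to range(A^T), and hence lies in the null space of A.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Build a rank-deficient 8 x 6 matrix A (rank 5), so its null space is
# one-dimensional.  The residual of min ||A^T y - b|| is orthogonal to
# range(A^T), i.e. it is a null vector of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 6))

b = rng.standard_normal(6)
y = lsqr(A.T, b)[0]               # least-squares solve with the transpose
r = b - A.T @ y                   # residual lies in null(A)
null_vec = r / np.linalg.norm(r)
```

Here `A @ null_vec` vanishes to solver accuracy; for a higher-dimensional null space one repeats this with the carefully constructed multiple right-hand sides mentioned above.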

**A Comparison between FMM and Treecode**

HUALONG FENG, Illinois Institute of Technology

Both FMM and treecode are approximate algorithms; therefore, their precision is an important criterion when comparing them. Two types of errors are of concern here: the maximum error and the average error. Using the average error as a criterion, it is found that FMM is superior to treecode; that is, to achieve the same accuracy, FMM is noticeably faster than treecode. While FMM controls the average error well, many applications require good control of the maximum error. For time-dependent applications like fluid dynamics or molecular dynamics, a small error in the current step may lead to a much larger one in a later step.

With the maximum error as the criterion, it is found that the two treecodes both perform as well as the FMM. This is surprising given that the FMM is of complexity O(N) and the treecodes are of complexity O(N log N), but it is not too difficult to understand. FMM uses an interaction list, an invention that speeds up computations. While this does not affect its control of the average error, using an interaction list hurts the algorithm's capacity to control the maximum error: FMM performs direct summations only for near neighbors, and all the so-called well-separated particles are treated with expansions, no matter what weights those particles carry. Regarding the maximum error, FMM trades precision for CPU time. The introduction of an interaction list saves time, but it sacrifices a certain degree of flexibility. Treecodes do not have this problem, as they use a divide-and-conquer strategy to decide whether to use an approximation or to do direct summations. A significant advantage of the treecode is that it is relatively easy to implement.

**Applying PETSc to Atmospheric Systems**

STEVE FROEHLICH, Argonne National Laboratory

The set of equations that governs atmospheric dynamics is referred to as
the primitive equations. Solving this system numerically is a very
complex and time-intensive task. A library of numerical tools called PETSc
(Portable, Extensible Toolkit for Scientific Computation) is utilized to
aid in this task. Applying PETSc to help solve this system provides
opportunities for atmospheric scientists to acquire and use advanced
numerical methods utilized in the PETSc libraries, allows PETSc
developers to test and optimize their library's ability to solve complex
nonlinear systems of equations, and provides the PETSc user community
with an example of how to solve systems of nonlinear partial
differential equations within the PETSc framework.

The aforementioned example is a simplified version of the 2-Dimensional
atmospheric primitive equations and uses a surface energy balance model
to obtain the diabatic changes in the thermodynamic energy equation.
Several scenarios are available for users to manipulate in order to
provide a full picture of the dynamical system and how PETSc solves this
system. Several numerical solvers that can be applied to the system are
also available, such as backward Euler, GL, and an implicit
variable time-stepping SUNDIALS solver. Other options such as
number of grid points, visualizations, and running in parallel are also
available as runtime command line arguments to interested users.

**Efficient Nonlinear Diffusion Tensor Image Registration Using DCT Basis Functions**

LIN GAN, Illinois Institute of Technology

Image registration is a common task in medical image processing; additional problems need to be considered when diffusion tensor images are used. In this paper a nonlinear registration algorithm for diffusion tensor (DT) MR images is proposed. The nonlinear deformation is modeled using a combination of Discrete Cosine Transform (DCT) basis functions, thus reducing the number of parameters that need to be estimated. This approach has been demonstrated to be an effective method for scalar image registration via SPM, and we show here how it can be extended to tensor images. The proposed approach employs the full tensor information via a Euclidean distance metric. Tensor reorientation is explicitly determined from the nonlinear deformation model and applied during the optimization process. We evaluate the proposed approach both quantitatively and qualitatively and show that it results in improved performance when compared to scalar registration via SPM. We further compare the proposed approach to a tensor registration method (DTI-TK) and show improved performance in terms of trace error and Euclidean distance error. The computational efficiency of the proposed approach is also evaluated and compared.

**Numerical Analysis of the Stochastic Moving Boundary Problem**

KUNWOO KIM, University of Illinois at Urbana-Champaign

Moving boundary problems arise in many areas of science, such as materials science, chemical engineering, biology, and finance, and they are of great importance in the area of partial differential equations, since they characterize phase-change phenomena where a system has two phases. In this talk, we consider the effect of noise on a one-dimensional moving boundary problem proposed by Ludford and Stewart and studied by Caffarelli and Vazquez. In particular, we examine the numerical analysis of the stochastic moving boundary problem. Based on a transformation that converts the stochastic moving boundary problem into a nonlocal nonlinear stochastic partial differential equation, we construct a numerical solution using the explicit finite difference method and investigate numerical convergence. This is joint work with Richard Sowers.

**High Frequency Equity Performance Attribution**

TINGTING LI, Illinois Institute of Technology

This article proposes an extension of the traditional relative-return attribution model. The extended attribution mechanism is applicable to high-frequency equity strategies, attributing absolute dollars gained or lost to net exposure, bucket exposures, long and short stock-specific activity, market making, and ETF trading activities.

**What Do Comments Tell Us? A Semi-Supervised Topic Model for Comment Analysis**

SHIZHU LIU, Illinois Institute of Technology

The increasing popularity of Web 2.0 applications in recent years has facilitated interaction between users and consequently produced a large volume of user comments on different kinds of entities. There is, thus, a need for the development of summarization techniques for such collections. In this paper, we investigate the automated generation of summaries based on user comments for different entities. We provide a formal definition of the problem and propose a semi-supervised generative model to discover similar and supplemental topics in user opinions with respect to the descriptive text provided by a publisher. The most representative sentences in user opinions are classified based on their sentiment and used to construct a summary of the comments. Experimental results on a test collection of 10 different kinds of products demonstrate the effectiveness of the proposed approach.

**Stable Computations with Gaussian RBFs**

MIKE MCCOURT, Cornell University

Existing work using Gaussian, and other, radial basis functions (RBFs) may involve very unstable problems for certain ranges of the "shape" parameter. Specifically, in the flat limit, the linear system associated with RBF interpolation becomes irrevocably ill-conditioned when working in the traditional basis. This work expands on previous research by Bengt Fornberg and others to develop a stable method for working with positive definite RBFs in arbitrary dimensions through an eigenfunction expansion of the kernel. We will consider the interpolation results in one dimension and then develop a general approximation approach for higher dimensions at reduced cost. This approach will also be applied to solve boundary value problems.
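
The flat-limit ill-conditioning mentioned above is easy to reproduce (a standalone sketch in standard double precision, separate from the talk's eigenfunction method): as the shape parameter $\varepsilon \to 0$, the condition number of the Gaussian interpolation matrix $A_{ij} = e^{-\varepsilon^2 (x_i - x_j)^2}$ explodes.

```python
import numpy as np

# Condition number of the Gaussian RBF interpolation matrix on 12 equispaced
# points as the shape parameter eps shrinks toward the flat limit.
x = np.linspace(0.0, 1.0, 12)
diff = x[:, None] - x[None, :]

def cond_number(eps):
    A = np.exp(-(eps * diff) ** 2)   # Gaussian kernel matrix in the standard basis
    return np.linalg.cond(A)

conds = {eps: cond_number(eps) for eps in (10.0, 1.0, 0.1)}
```

For small $\varepsilon$ the matrix is numerically indistinguishable from the rank-one all-ones matrix, which is why a change of basis, such as the eigenfunction expansion discussed above, is needed for stable computation.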

**Coalescent-based Method for Inferring Migration Rates from Genetic Data**

DESISLAVA PETKOVA, University of Chicago

Landscape features such as barriers or corridors structure the genetic variation in a species by impeding or facilitating movement across the habitat. We consider a class of demographic models, parameterized in terms of migration rates between neighboring locations, that predict equilibrium genetic structure and thus allow for Bayesian estimation of the demographic parameters. This approach to modeling the spatial organization of genetic variation is based on Kingman's coalescent, a stochastic process which traces the ancestry of a sample backwards in time. We discuss how the model addresses the inference problem of estimating migration rates from observed genetic differentiation, and we use simulated genotype data to illustrate its performance in the case of a two-dimensional habitat under several models of migration.

**An Examination of Precision Effects on Numerical Solutions of Partial Differential Equations**

PAUL RIGGE, University of Michigan, Ann Arbor

We demonstrate massively parallel scaling of solutions to the three-dimensional nonlinear Schrödinger equation. In doing so we find that numerical precision effects determine the accuracy to which numerical solutions can be obtained. We then examine numerical precision effects more closely for the sine-Gordon equation. We implement high-order implicit Runge-Kutta solvers using fixed-point iteration and compare diagonally and fully implicit schemes. We find that in quadruple precision, fourteenth-order time-stepping schemes are very efficient.

**Long-Run Analysis of the Stochastic Replicator Dynamics with Random Jumps**

ANDREW VLASIC, University of Illinois at Urbana-Champaign

We generalize the stochastic version of the replicator dynamics due to Fudenberg and Harris. In particular, we add a random jump term to the payoff function to simulate anomalous events and their effects on fitness. Assuming a $2 \times 2$ game and using a particular characteristic of the jump functions, we are able to estimate the ergodic measure for all games. Furthermore, we establish some general stability results.

**Modeling Vitrification as a Free Boundary between Anomalous and Classical Diffusion**

CHRIS VOGL, Northwestern University

Anhydrobiosis refers to a type of waterless hibernation exhibited by certain organisms. If anhydrobiosis can be induced in non-anhydrobiotic organisms, this type of preservation would have many advantages over cryopreservation, including the ability to both preserve and store biomedical materials without the need for cryogenic temperatures. However, this process is far from perfected. A better understanding of the vitrification of the organism in a trehalose glass, considered essential in obtaining anhydrobiosis, will help guide more effective development of desiccation techniques. Transport through the trehalose glass has been shown to display anomalous subdiffusive behavior and is thus governed by an integro-differential equation known as the fractional diffusion equation. Accordingly, the formation of the glass will be modeled as a moving-boundary problem with both anomalous and classical diffusion. A numerical scheme will be developed that incorporates an implicit discretization scheme with a moving boundary governed by a non-linear equation. Interface propagation speeds will be computed on an infinite domain for the two-phase system and compared to those of one-phase classical and one-phase anomalous systems.

**Progress in Large-Scale Differential Variational Inequalities for Heterogeneous Materials**

LEI WANG, Argonne National Laboratory

Modeling the mesoscale behavior of irradiated materials is an essential aspect of developing a computationally predictive, experimentally validated, multiscale understanding of the thermo-mechanical behavior of nuclear fuel. Phase field models provide a flexible representation of time-dependent heterogeneous materials. We explain how differential variational inequalities (DVIs) naturally arise in phase field models, and we discuss recent work in developing advanced numerical techniques and scalable software for DVIs as applied to large-scale, heterogeneous materials problems.

**The Term Structure of Interest Rates as a Random Field in a Multi-curve Setting**

SHENGQIANG XU, Illinois Institute of Technology

A new LIBOR market model, with the interest rate modeled as a random field under the forward measure, is derived. It is then extended to the multi-curve case, where the curve for generating future LIBOR rates and the curve for discounting cash flows are different. In both new LIBOR market models, caps and swaptions are priced with closed formulas that reduce exactly to the Black formulas.

**Classification Based on Permanental Process with Cyclic Approximations**

JIE YANG, University of Illinois at Chicago

In this talk we introduce a statistical model based on a permanental process for supervised classification problems. Unlike much research in the literature, we assume only exchangeability instead of independence of observations. Regardless of the number of classes or the dimension of the feature variables, the model may require only 2-3 parameters for fitting the covariance structure within clusters. It works well even if each class occupies non-convex, disjoint regions, or regions overlapping with other classes in the feature space. To calculate the weighted permanental ratio involved, we propose analytic approximations based on its cyclic expansion, which require only polynomial time up to order three and work well for classification purposes. An application to DNA microarray analysis indicates that the permanental model with cyclic approximations is more capable of handling high-dimensional data. It can employ more feature variables in an efficient way and reduce the prediction error significantly. This is critical when the true classification relies on non-reducible high-dimensional features.

**Counterparty Risk and the Impact of Collateralization in CDS Contracts**

ISMAIL IYIGUNLER, Illinois Institute of Technology

We analyze the counterparty risk embedded in CDS contracts in the presence of a bilateral margin agreement. First, we investigate the pricing of collateralized counterparty risk, and we derive the bilateral Credit Valuation Adjustment (CVA), unilateral Credit Valuation Adjustment (UCVA), and Debt Valuation Adjustment (DVA). We propose a model for the collateral by incorporating all related factors, such as the thresholds, haircuts, and margin period of risk. We derive the dynamics of the bilateral CVA in a general form with related jump martingales. We also introduce the Spread Value Adjustment (SVA), indicating the counterparty-risk-adjusted spread. The counterparty-risky and counterparty-risk-free spread dynamics are derived, and the dynamics of the SVA follow as a consequence. We finally employ a Markovian copula model for default intensities and illustrate our findings with numerical results.

**Time-Changed Lévy Processes**

JINGRAN LIU, Illinois Institute of Technology

In reality, asset prices jump, leading to non-normal return innovations. Moreover, return volatility varies stochastically over time. The mathematical concept of time-changing stochastic processes can be regarded as one of the standard tools for building financial models, since it can simultaneously address both issues. Popular models, namely time-changed Lévy processes, in which the time change is given by a subordinator or an absolutely continuous time change, are discussed. We also present methods for pricing and hedging options.

**Nonlinear Simulations of Vesicle Wrinkling**

KAI LIU, Illinois Institute of Technology

In this paper, we study the wrinkling dynamics of a vesicle in an extensional flow. This work is motivated by recent experiments and linear theory on wrinkles of a quasi-spherical membrane. It is suggested that the wrinkling instability is induced by the negative surface tension of the membrane due to a sudden reversal of flow direction. Here, we are interested in exploring wrinkle formation and evolution in the nonlinear regime. We focus on a perturbed two-dimensional circular vesicle and show that the linear analytical results are qualitatively independent of the number of dimensions; hence the two-dimensional studies can provide insights into the full three-dimensional problem. We develop a spectrally accurate boundary integral method to simulate the nonlinear coupling between flow and membrane morphology, and the nonlinear evolution of the surface tension. We demonstrate that for a quasi-circular vesicle, the linear theory predicts well the dynamics of wrinkles and their characteristic wave numbers. Nonlinear results for an elongated vesicle show that there exist dumbbell-like stationary shapes if the external flow is weak. For strong flows, wrinkles with pronounced amplitudes form during the evolution; however, the final equilibrium shape is still elliptic. As far as the shape transition is concerned, our numerical simulations are able to capture the main features of wrinkles observed in the experiments. Interestingly, our numerical results reveal that, in addition to wrinkling, asymmetric rotation can occur for slightly tilted vesicles.

**Simulations of Simple Models for Martensitic Phase Transformations**

BENSON K. MUITE, University of Michigan, Ann Arbor

We numerically examine the difference in martensite pattern formation between a simple geometrically linear viscoelastic model and a simple geometrically nonlinear viscoelastic model. Both models are for the square (austenite) to rectangle (martensite) phase transformation and include a high-order capillarity term. The geometrically nonlinear model captures features observed in experiments that the geometrically linear model does not capture.

**Comparison of Continuous and Discrete Kernel Eigenvalue Problems**

MICHAEL MACHEN, Illinois Institute of Technology

Kernels are an important tool for obtaining accurate approximations in areas such as numerical analysis
(meshfree methods), machine learning, and statistics. In many problems, one uses discrete kernel matrices
$\mathsf{K}$ with entries $\mathsf{K}_{ij} = K(x_i, x_j)$, $i, j = 1, 2, \ldots, N$, where $K$ is a positive definite kernel.
The points $x_i$, $x_j$ are often referred to as data points and centers and lie within the domain of the kernel $K$.

Associated with the kernel matrix $\mathsf{K}$ is the discrete eigenvalue problem, with eigenvalues $\lambda_j^{\ast}$; associated with the kernel function $K$ is the continuous eigenvalue problem, with eigenvalues $\lambda_j$.

We investigate the connection between the continuous and the discrete eigenvalue problems in the hope of
approximating $\lambda_j$ by $\lambda_j^{\ast}$. Similarly, we study the behavior of the eigenvalues to discover a way of representing the eigenfunctions. In our
study, we use the min, max, Brownian motion, and Gaussian kernels.
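
For the Brownian motion kernel this comparison can be made concrete, since the continuous (Mercer) eigenvalues of $K(s,t) = \min(s,t)$ on $[0,1]$ are known in closed form: $\lambda_j = \big((j-\tfrac{1}{2})\pi\big)^{-2}$. Below is a minimal sketch (our own illustration, with the discrete eigenvalues taken from $\mathsf{K}/N$, one common scaling):

```python
import numpy as np

# Brownian motion kernel K(s,t) = min(s,t) on a uniform grid of (0,1];
# eigenvalues of K/N approximate the Mercer eigenvalues 1/((j-1/2)^2 pi^2).
N = 400
x = np.arange(1, N + 1) / N
K = np.minimum(x[:, None], x[None, :])               # discrete kernel matrix
discrete = np.sort(np.linalg.eigvalsh(K / N))[::-1]  # lambda_j^*, descending

j = np.arange(1, 6)
continuous = 1.0 / (((j - 0.5) * np.pi) ** 2)        # first five Mercer eigenvalues
rel_err = np.abs(discrete[:5] - continuous) / continuous
```

On this grid the first five discrete eigenvalues agree with the continuous ones to well under a percent, and the agreement improves as $N$ grows.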

**Point of Symmetry Theorem in Information Based Function Approximations**

YIZHI ZHANG, Illinois Institute of Technology

In the process of doing scientific computations we always rely on some information. In practice, this information is typically noisy, i.e., contaminated by error. Problems with noisy information have always attracted considerable attention from researchers in many different scientific fields. We introduce a new result, called the Point of Symmetry Theorem, which can be widely used in function approximation problems. One can achieve the approximation of a function in a vector space with the best error by finding the point of symmetry in the related space. The theorem, proofs, and examples will be presented.

Copyright © 2011 *Illinois Institute of Technology*. All rights reserved.