This thesis consists of three parts, each devoted to a topic in optimization under uncertainty.

The first part studies robust simulation of global warming policies. Integrated assessment models that combine geophysical and economic features are often used to evaluate and compare global warming policies. Because these models typically involve profound uncertainties, a simulation approach is often used. This approach requires that the distribution of the uncertain parameters be clearly specified, which is typically impossible because there is often a significant amount of ambiguity (e.g., estimation error) in specifying the distribution. In this part, we model the uncertain parameters with the widely used multivariate normal distribution, but we assume only that its mean vector and covariance matrix lie within prescribed ambiguity sets. We then show how to find the worst-case performance of a given policy over all distributions consistent with the ambiguity sets. This worst-case performance provides a robust evaluation of the policy. We test our algorithm on a well-known integrated assessment model of climate change, the DICE model. We find that the DICE model is robust with respect to the variances of the parameters but sensitive to the means and correlations. Furthermore, we find that, based on the DICE model, moderately tight environmental policies robustly outperform both the no-control policy and the aggressive policies proposed by Stern and Gore.

The second part studies optimization problems with value-at-risk (VaR) constraints. Because it lacks subadditivity, VaR is not a coherent risk measure and does not necessarily preserve convexity, so the problems we consider are typically not provably convex. A natural way to resolve this difficulty is to use convex conservative approximations (CCAs). Among all CCAs, the conditional value-at-risk (CVaR) approximation is known to be the best.
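For concreteness, in one standard formulation (the notation here is assumed, not taken from the thesis), a VaR constraint at level $1-\alpha$ and its CVaR conservative approximation can be sketched as:

```latex
% VaR constraint at level 1 - alpha (equivalently, a chance constraint):
%   P( g(x, \xi) \le 0 ) \ge 1 - \alpha
\mathrm{VaR}_{1-\alpha}\bigl(g(x,\xi)\bigr) \le 0
% CVaR conservative approximation, in the Rockafellar--Uryasev form:
\mathrm{CVaR}_{1-\alpha}\bigl(g(x,\xi)\bigr)
  = \min_{t \in \mathbb{R}} \Bigl\{ t + \tfrac{1}{\alpha}\,
    \mathbb{E}\bigl[\,(g(x,\xi) - t)^{+}\,\bigr] \Bigr\} \le 0
```

Since $\mathrm{CVaR}_{1-\alpha} \ge \mathrm{VaR}_{1-\alpha}$, any $x$ feasible for the CVaR constraint is feasible for the VaR constraint, which is what makes the approximation conservative (and convex whenever $g$ is convex in $x$).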
We investigate the CVaR approximation from a different perspective and characterize what is lost in this approximation. We then show that this loss can be remedied by an iterative convex approximation approach, in which each iteration requires solving only a CVaR-like approximation problem of the same complexity as the CVaR approximation. We show that the solution found by this approach generally makes the VaR constraints binding, is guaranteed to be better than the solution found by the CVaR approximation, and, moreover, is empirically often globally optimal for the target problem. Numerical experiments illustrate the performance of our approach.

The third part studies joint chance constrained programs (JCCPs). JCCPs are often non-convex and non-smooth, and thus are generally challenging to solve. In this part, we propose a logarithm-sum-exponential smoothing technique that approximates a joint chance constraint by the difference of two smooth convex functions, and we use a sequential convex approximation algorithm, coupled with a Monte Carlo method, to solve the approximation. We call this approach a smooth Monte Carlo approach. We show that it can handle both smooth and non-smooth JCCPs in which the random variables may be continuous, discrete, or mixed. Numerical experiments further confirm our findings.
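To illustrate the smoothing idea behind this approach (a minimal sketch on a toy problem of our own choosing, not the thesis's algorithm), the log-sum-exp function replaces the non-smooth maximum over the individual constraints, and a Monte Carlo estimate of the joint satisfaction probability can then be built from smooth functions:

```python
import math
import random

def smooth_max(values, mu=50.0):
    # Log-sum-exp upper bound on max(values):
    #   max(v) <= (1/mu) * log(sum_j exp(mu * v_j)) <= max(v) + log(m)/mu
    top = max(values)  # subtract the max first for numerical stability
    return top + math.log(sum(math.exp(mu * (v - top)) for v in values)) / mu

def mc_joint_prob(x, sample_xi, constraints, n=10_000, mu=50.0, seed=0):
    # Monte Carlo estimate of P(max_j g_j(x, xi) <= 0), with the max
    # replaced by its log-sum-exp smoothing so that the quantity being
    # sampled is a smooth function of the individual constraints.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        xi = sample_xi(rng)
        if smooth_max([g(x, xi) for g in constraints], mu) <= 0.0:
            hits += 1
    return hits / n

# Toy joint chance constraint: g1 = xi - x, g2 = -xi - x with xi ~ N(0, 1),
# i.e. the constraint P(|xi| <= x) >= 1 - alpha.
constraints = [lambda x, xi: xi - x, lambda x, xi: -xi - x]
p = mc_joint_prob(2.0, lambda rng: rng.gauss(0.0, 1.0), constraints)
```

Because the log-sum-exp function over-estimates the maximum by at most `log(m)/mu` for `m` constraints, the smoothed probability estimate is slightly conservative, and the bias shrinks as `mu` grows.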