RISS Academic Research Information Service

      • Foam Transport in Porous Media: In-Situ Capillary Pressure Measurement and Application to Enhanced Heavy Oil Recovery

        Vavra, Eric D ProQuest Dissertations & Theses Rice University 2021 Overseas doctoral dissertation (DDOD)

        Aqueous foam flow in porous media has been the subject of an increasing number of studies in recent years. Foam is a dynamic colloid that can exhibit unintuitive properties when flowing in porous media; thus, foam experiments often produce unclear or conflicting results. With potentially lucrative applications ranging from enhanced oil recovery (EOR) to subterranean CO2 storage, there is great incentive to understand the fundamental physicochemical processes that accurately describe and predict the nature of flowing foam in porous media.

        One important aspect of foam flowing in porous media is stability. Many variables, such as the quality of the foam, the permeability of the medium, the velocities of the phases, and the type of gas, can influence foam stability. Classically, foam strength is thought to be governed by the stability of the liquid lamellae that separate individual gas bubbles and by a "limiting" capillary pressure above which foam lamellae rupture. In this thesis, a custom probe was designed and constructed for directly measuring in-situ capillary pressures of foam in porous media. Foam quality scan experiments were conducted primarily in a 143-Darcy sand pack with AOS14-16-stabilized N2 foam at ambient lab conditions and constant gas flow rates. Capillary pressure was observed to increase with increasing foam quality before plateauing over a range of qualities in the low-quality regime. Then, in contrast to the classical view, capillary pressure decreased with increasing foam quality in the high-quality regime. The measured decreases in capillary pressure were correlated with in-situ observations of increasing bubble size. These general trends occurred regardless of gas velocity over the range of velocities tested. Increasing velocity led to increasing transition foam qualities and plateau capillary pressures. This finding implies that foam mechanisms which are a function of velocity, such as foam generation by lamella division, were significant in determining the behavior of the foam in porous media.

        Additionally, several other findings improved understanding of foam flow in the sand pack. A nearly constant transition liquid velocity, separating the low- and high-quality regimes, was identified regardless of gas velocity. The rheology of the N2 foam was found to be shear thinning in the low-quality regime and described by a power-law model with an exponent of -0.9. In the high-quality regime, the behavior of the coarse-bubble and continuous-gas flow systems was weakly shear thinning or, at the slowest velocities, nearly Newtonian, as expected for gas flow alone. Comparing tests with N2 or CO2 as the gas phase revealed the same transition foam quality but different apparent viscosities and capillary pressures. Trends with absolute pressure and temperature are also discussed.

        An application of interest in this thesis is foam EOR. Generally, foams collapse in the presence of crude oil, but foaming formulations can be chemically engineered to interact synergistically with oil. In this thesis, alkali-surfactant-foam (ASF) EOR for the recovery of viscous and heavy oils was documented. For this process, careful characterization of the physicochemical interactions among the aqueous, oleic, gas, and solid phases is essential. To aid in this, a novel phase-behavior viscosity map was developed to conveniently select optimal injection conditions. The map is constructed from phase behavior test results as a function of log(added salinity) vs. soap fraction and from viscosities measured by the falling-sphere method. For the viscous oil that was tested, conditions resulting in low-viscosity oil-in-water (O/W) emulsions were the most favorable. The characteristic soap fraction was selected as a benchmark to relate dynamic flow behavior in micromodel experiments to static phase behavior in sealed pipettes.

        Microfluidic devices have proven to be useful for visualizing and confirming flow processes of foam in porous media that would otherwise be much more challenging to observe. For this reason, microfluidic devices mimicking porous media were designed for multiphase flow characterization. A detailed description of the construction of oil-resistant polymer micromodels is provided in this thesis. This micromodel platform was utilized to conduct four microflooding experiments. Foam was found to be stable across all flooding experiments. The experimental results at different characteristic soap fractions and salinities were found to be consistent with predictions made from the phase-behavior viscosity map. The microfluidic platform also provided new insights into the role of wettability alteration and emulsion formation. In the most hydrophilic case (FE1-), 90% of the 5,855 cP heavy oil was recovered at an apparent viscosity of 820 cP. This result was made possible by wettability alteration towards water-wet conditions and the formation of low-apparent-viscosity O/W macroemulsions. Conversely, the most hydrophobic case (FE2) resulted in a lower total oil recovery (70%) accompanied by a large increase in apparent viscosity, likely due to the formation of water-in-oil (W/O) macroemulsions, as predicted by referencing the phase-behavior viscosity map. Additionally, wettability alteration and bubble-oil pinch-off were identified as mechanisms contributing to the formation of O/W macroemulsions in the more hydrophilic flooding experiments. Foam was more effective at recovering oil in these cases, presumably due to more favorable mobility control.
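        The shear-thinning power law reported for the low-quality regime can be read as apparent viscosity scaling with velocity raised to the -0.9 exponent. A minimal sketch of that reading follows; only the exponent comes from the abstract, the consistency index and velocities are illustrative assumptions.

        import numpy as np

        # Minimal sketch (illustrative, not thesis code): power-law rheology of the form
        # mu_app = K * u**n with n = -0.9 as reported for the low-quality regime.
        def apparent_viscosity(velocity, consistency_index=1.0, exponent=-0.9):
            """Apparent foam viscosity as a power law of total superficial velocity."""
            return consistency_index * velocity ** exponent

        velocities = np.array([1.0, 2.0, 5.0, 10.0])    # arbitrary units, assumed values
        print(apparent_viscosity(velocities))            # shear thinning: viscosity falls as velocity rises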

      • A Systematic Study of Short and Long Range Interactions in Associating Fluids Using Molecular Theory

        Fouad, Wael A ProQuest Dissertations & Theses Rice University 2016 Overseas doctoral dissertation (DDOD)

        Parameters needed for the Statistical Associating Fluid Theory (SAFT) equation of state are usually fit to pure-component saturated liquid density and vapor pressure. In this thesis, other sources of information such as quantum mechanics, infinite dilution properties, Fourier transform infrared (FT-IR) spectroscopy and molecular dynamics (MD) simulation are used to obtain a unique set of parameters for complex fluids such as water and alcohols. Consequently, the equation of state can be more predictive and the parameters are no longer system dependent. Moreover, the four vertices of the molecular thermodynamic tetrahedron (phase equilibrium experiments, spectroscopy, MD simulation and molecular theory) are used to study the distribution of hydrogen bonds in water and alcohol containing mixtures. The new sets of physical parameters and the knowledge gained in studying hydrogen bonding are then applied to model the water content of sour natural gas mixtures as well as the phase behavior of alcohol + n-alkane and alcohol + water binary systems.

        Accurate determination of the water content in hydrocarbons is critical for the petroleum industry due to corrosion and hydrate formation problems. Experimental data available in the literature on the water content of n-alkanes (C5 and higher) are widely scattered. The perturbed chain form of the SAFT equation of state (PC-SAFT) was used to accurately correlate the water mole fraction in n-alkanes, C1 to C16, in equilibrium with liquid water or ice. In addition, a list of experimental data is recommended to the reader based on its agreement with the fundamental equation of state used in this dissertation. The proposed molecular model was then applied to predict the water content of pure carbon dioxide (CO2), hydrogen sulfide (H2S), nitrous oxide (N2O), nitrogen (N2) and argon (Ar) systems. The theory was also extended to model the water content of acid-gas-containing mixtures in equilibrium with an aqueous or a hydrate phase. To accurately model the liquid-liquid equilibrium (LLE) at subcritical conditions, cross-association among CO2, H2S and water was included. The hydrate phase was modeled using a modified van der Waals and Platteeuw (vdWP) theory. The agreement between the model predictions and experimental data measured in our lab was found to be good across a wide range of temperatures and pressures.

        Modeling the phase behavior of liquid water can be quite challenging due to the formation of complex hydrogen bonding network structures at low temperatures. However, alcohols share some similarities with water in terms of structure and physical interactions. As a result, studying alcohol + n-alkane binary systems can provide a better understanding of water-alkane interactions. Moreover, the application of alcohols in the petroleum and biodiesel industries is of great importance. Accordingly, Polar PC-SAFT was used to model short-chain 1-alcohol + n-alkane mixtures. The ability of the equation of state to predict accurate activity coefficients at infinite dilution was demonstrated as a function of temperature. Investigations show that the association term in SAFT plays an important role in capturing the right composition dependence of the activity coefficients in comparison to excess Gibbs free energy models (UNIQUAC in this case). Results also show that considering long-range polar interactions can significantly improve the free-monomer fractions predicted by PC-SAFT in comparison to spectroscopic data and molecular dynamics (MD) simulations. Additionally, evidence of hydrogen bonding cooperativity in 1-alcohol + n-alkane systems is discussed using spectroscopy, simulation and theory. In general, the results demonstrate the theory's predictive power, the limitations of Wertheim's first-order thermodynamic perturbation theory (TPT1), and the importance of considering long-range polar interactions for better hydrogen bonding thermodynamics. (Abstract shortened by ProQuest.)
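        As a worked illustration of the association quantities discussed above, a minimal sketch of the free-monomer fraction from Wertheim's TPT1 for a pure fluid with a 2B association scheme (one donor site, one acceptor site) follows; the density and association strength are illustrative assumptions, not values from the thesis.

        import numpy as np

        # Minimal sketch (assumptions, not thesis code): for the 2B scheme, the fraction of
        # unbonded sites X satisfies X = 1 / (1 + rho * X * Delta), giving the closed form below;
        # the free-monomer fraction is the product over both sites, X**2.
        def unbonded_site_fraction(rho, delta):
            rd = rho * delta
            return (-1.0 + np.sqrt(1.0 + 4.0 * rd)) / (2.0 * rd)

        def monomer_fraction(rho, delta):
            x = unbonded_site_fraction(rho, delta)
            return x * x

        print(monomer_fraction(rho=0.03, delta=500.0))   # illustrative numbers only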

      • Essays on the Use of Duality, Robust Empirical Methods, Panel Treatments, and Model Averaging with Applications to Housing Price Index Construction and World Productivity Growth

        Shang, Chenjun ProQuest Dissertations & Theses Rice University 2015 Overseas doctoral dissertation (DDOD)

        This dissertation focuses on analyzing the production side of the economy and aims to provide robust estimates of the parameters of interest. In a production process, the output level is mainly determined by two parts: inputs and productivity. Compared with the inputs, which are concrete and measurable, productivity is an unobservable factor that relies on economic models for estimation. An appropriate and robust modeling method is essential if we want to accurately capture the productivity term.

        Chapter 1 reviews the research on productivity with a focus on stochastic frontier analysis, a classic framework in the productivity literature. The chapter starts with the definition and decomposition of productivity. Measured as a ratio of outputs to inputs, productivity can be divided into two main parts: innovations and technical efficiencies. The growth of technologies and innovations depends heavily on education and research, and the technical efficiencies of firms vary with their administration, management skills, and allocation of inputs. In studies analyzing these two components, stochastic frontier models have gradually become the standard method. This chapter briefly introduces the development of stochastic frontier models, with an emphasis on the panel data setting. Twelve specifications, as well as their implementation methods, are then discussed in detail. These representative models make different assumptions about the efficiency term, aiming to provide better approximations of the underlying data generating process without adding too many constraints. Comparing all these models, we expect different estimates of productivity from different specifications. The evaluation and selection of a suitable model for empirical analysis therefore become a problem. Standard information criteria provide measures of the performance of each candidate model, but multiple criteria can lead to contradictory conclusions about which model is best. In addition, the model selection approach itself ignores the risk of model uncertainty. This issue of dealing with multiple competing models is addressed in Chapter 3.

        While Chapter 1 concentrates on methods of estimating productivity, Chapter 2 focuses on the role of proper specification of the inputs used in generating the output. Though the inputs of a production process are usually observable, their effects on the outputs are often not clear and straightforward. The allocation of different inputs is affected by both the production technology and market prices. Chapter 2 utilizes the duality between the production maximization problem and the cost minimization problem to uncover the shadow prices of inputs, and constructs corresponding price indexes for further analysis. This chapter is motivated by recent housing bubbles and considers the housing market for the empirical application. The housing market is an important component of the economy and consistently attracts the interest of researchers. Diewert (2010), for example, provided a comparison of various methods of constructing property price indexes using index number and hedonic regression methods, which he illustrated using data over a number of quarters from a small Dutch town. Chapter 2 provides an alternative approach based on Shephard's dual lemma, and I apply it to the same data used by Diewert. This method avoids the multicollinearity problem associated with traditional hedonic regression, and the resulting prices of property characteristics show smoother trends than Diewert's results. The chapter also revisits the Diewert and Shimizu (2013) study that employed hedonic regressions to decompose the price of residential property in Tokyo into land and structure components and that constructed constant-quality indexes for land and structure prices, respectively. I use three models from Diewert and Shimizu (2013) to fit our real estate data from town 'A' in the Netherlands, and also construct price indexes for land and structure, which are compared with results derived using duality theory.

        Again, we have multiple models in the study of the housing market. As in the case of productivity, the shadow prices of property characteristics are unobservable (due to the nature of the input or intermediate good, an explicit market may not exist). Thus, we rely on certain methods for estimation, and there is a set of candidate models. Chapters 1 and 2 leave us in a dilemma. Which model is correct? Which model do we choose? Is any model actually the correct one, or are we choosing among misspecified models? Do we simply choose one model and ignore results from the others? These issues are addressed in Chapter 3, wherein a model averaging approach is explored to provide estimates that are robust to various model specifications. Model averaging methods can be used to provide robust estimates by combining a set of competing models through certain optimization mechanisms. (Abstract shortened by ProQuest.)
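        To make the model-averaging idea of Chapter 3 concrete, a minimal sketch of one common scheme, Akaike-type information-criterion weights, follows; the AIC values and per-model estimates are illustrative assumptions, and this is not necessarily the weighting used in the dissertation.

        import numpy as np

        # Minimal sketch (assumptions, not the dissertation's method): combine the same parameter
        # estimated under competing specifications using normalized exp(-0.5 * deltaAIC) weights.
        aic = np.array([102.3, 100.1, 105.7])         # AIC of each candidate model (illustrative)
        estimates = np.array([0.021, 0.018, 0.025])   # one parameter, estimated by each model

        delta = aic - aic.min()
        weights = np.exp(-0.5 * delta)
        weights /= weights.sum()

        print(weights, weights @ estimates)            # model-averaged estimate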

      • Development and Application of Stochastic Methods for Radiation Belt Simulations

        Zheng, Liheng ProQuest Dissertations & Theses Rice University 2015 Overseas doctoral dissertation (DDOD)

        This thesis describes a method for modeling radiation belt electron diffusion, which solves the radiation belt Fokker-Planck equation using its equivalent stochastic differential equations, and presents applications of this method to investigating drift shell splitting effects on radiation belt electron phase space density. The theory of the stochastic differential equation method of solving Fokker-Planck equations is formulated in this thesis, in the context of the radiation belt electron diffusion problem, and is generalized to curvilinear coordinates to enable calculation of the electron phase space density as a function of adiabatic invariants M, K and L. Based on this theory, a three-dimensional radiation belt electron model in adiabatic invariant coordinates, named REM (for Radbelt Electron Model), is constructed and validated against both known results from other methods and spacecraft measurements. Mathematical derivations and the essential numerical algorithms that constitute REM are presented in this thesis. As the only model to date that can solve the fully three-dimensional diffusion problem, REM is used to study the effects of drift shell splitting, which gives rise to M-L and K-L off-diagonal terms in the radiation belt diffusion tensor. REM simulation results suggest that drift shell splitting reduces outer radiation belt electron phase space density enhancements during electron injection events. Plots of the phase space density sources, which are unique products of the stochastic differential equation method, and theoretical analysis further reveal that this reduction effect is caused by a change of the phase space location of the source to smaller L shells, and has a limit corresponding to two-dimensional local diffusion on a curved surface in the (M,K,L) phase space.
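        The core idea of solving a Fokker-Planck equation through its equivalent stochastic differential equations can be sketched in one dimension. This illustrates the general technique rather than REM itself; the diffusion coefficient, step sizes, and initial condition are illustrative assumptions.

        import numpy as np

        # Minimal sketch (not REM): for df/dt = d/dx( D(x) df/dx ), the equivalent Ito SDE is
        # dX = D'(X) dt + sqrt(2 D(X)) dW; Euler-Maruyama sample paths then approximate f(x, t).
        def D(x):
            return 0.1 + 0.05 * x**2      # illustrative diffusion coefficient

        def dD_dx(x):
            return 0.1 * x                # its derivative, the drift of the equivalent SDE

        rng = np.random.default_rng(0)
        n_paths, n_steps, dt = 10_000, 500, 1e-3
        x = np.full(n_paths, 1.0)          # all sample paths start at x = 1

        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), n_paths)
            x += dD_dx(x) * dt + np.sqrt(2.0 * D(x)) * dw

        hist, edges = np.histogram(x, bins=50, density=True)   # approximates f(x, t)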

      • Cluster Analysis for Big-K Data: Models and Algorithms Based on K-indicators

        Yang, Yuchen ProQuest Dissertations & Theses Rice University 2020 Overseas doctoral dissertation (DDOD)

        Cluster analysis is a fundamental unsupervised machine learning strategy with wide-ranging applications. When clustering big data, existing methods of choice increasingly encounter performance bottlenecks that limit solution quality and efficiency. To address such emerging bottlenecks, we propose a new clustering model, called K-indicators, based on a "subspace matching" viewpoint. This non-convex optimization model allows an effective semi-convexification scheme, leading to an essentially deterministic, two-layered alternating projection algorithm called KindAP that requires neither random initialization nor parameter tuning, while maintaining complexity linear in the number of data points. We establish global convergence for the inner iterations and an exact recovery result for data sets with tight clusters. Building on the basic K-indicators model, a more advanced model is constructed to perform simultaneous outlier detection and cluster analysis. Under the spectral clustering framework, extensive experimental results on both synthetic and real datasets show that the proposed methods scale better than K-means and other baseline methods in both solution quality and running time. An open-source software package in Python that implements the algorithms studied in this thesis has been developed and released online.
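        A rough sketch of the "subspace matching" viewpoint follows: alternate between an orthogonal (Procrustes) rotation of a spectral embedding and a projection onto indicator matrices. This illustrates the general idea only, not the released KindAP algorithm, and the embedding is assumed to come from the k leading eigenvectors of a graph Laplacian.

        import numpy as np

        # Rough sketch (not KindAP): round an n-by-k spectral embedding U to cluster labels by
        # alternating a Procrustes rotation with projection onto row-wise indicator matrices.
        def round_embedding(U, n_iters=50):
            n, k = U.shape
            Z = np.eye(k)                                 # current orthogonal rotation
            for _ in range(n_iters):
                H = U @ Z
                N = np.zeros_like(H)
                N[np.arange(n), H.argmax(axis=1)] = 1.0   # nearest indicator matrix, row by row
                W, _, Vt = np.linalg.svd(U.T @ N)         # Procrustes alignment of U with N
                Z = W @ Vt
            return (U @ Z).argmax(axis=1)                 # cluster labels

        # labels = round_embedding(spectral_embedding)    # spectral_embedding: n x k array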

      • Cooperative Execution of Parallel Tasks with Synchronization Constraints

        Imam, Shams Mahmood ProQuest Dissertations & Theses Rice University 2016 Overseas doctoral dissertation (DDOD)

        The topic of this thesis is the effective execution of parallel applications on emerging multicore and manycore systems in the presence of modern synchronization and coordination constraints. Synchronization and coordination can contribute significant productivity and performance overheads to the development and execution of parallel programs. Higher-level programming models, such as the Task Parallel Model and Actor Model, provide abstractions that can be used to simplify writing parallel programs, in contrast to lower-level programming models that directly expose locks, threads and processes. However, these higher-level models often lack efficient support for general synchronization patterns that are necessary for a wide range of applications. Many modern synchronization and coordination constructs in parallel programs can incur significant performance overheads on current runtime systems, or significant productivity overheads when the programmer is forced to complicate their code to mitigate these performance overheads. We believe that a cooperation between the programmer and the runtime system is necessary to reduce the parallel overhead and to execute the available parallelism efficiently in the presence of synchronization constraints. In a cooperative approach, an executing entity yields control to other entities at well-defined points during its execution. This thesis shows that the use of cooperative techniques is critical to performance and scalability of certain parallel programming models, especially in the presence of modern synchronization and coordination constraints such as asynchronous tasks, futures, phasers, data-driven tasks, and actors. In particular, we focus on cooperative extensions and runtimes for the async-finish Task Parallel Model and the Actor Model in this thesis. Our work shows that cooperative techniques simplify programmability and deliver significant performance improvements by reducing the overhead in modern parallel programming models.
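        As a small illustration of the cooperative style described here, where an executing entity yields control at well-defined points, a minimal sketch in Python's asyncio with a future-like synchronization follows; this is an analogy to the idea, not the runtimes developed in the thesis.

        import asyncio

        # Minimal sketch (illustrative): tasks yield at explicit await points instead of being
        # preempted, and a future-like Event coordinates producer and consumer cooperatively.
        async def producer(done: asyncio.Event):
            print("producer: computing")
            await asyncio.sleep(0)        # explicit yield point back to the scheduler
            print("producer: publishing result")
            done.set()

        async def consumer(done: asyncio.Event):
            print("consumer: waiting on future-like event")
            await done.wait()             # cooperative blocking: other tasks keep running
            print("consumer: result available, continuing")

        async def main():
            done = asyncio.Event()
            await asyncio.gather(consumer(done), producer(done))

        asyncio.run(main())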

      • GPU-Accelerated Discontinuous Galerkin Methods on Hybrid Meshes: Applications in Seismic Imaging

        Wang, Zheng ProQuest Dissertations & Theses Rice University 2017 Overseas doctoral dissertation (DDOD)

        Seismic imaging is a geophysical technique that assists in understanding subsurface structure on regional and global scales. With the development of computer technology, computationally intensive seismic algorithms have begun to gain attention in both academia and industry. These algorithms typically produce high-quality subsurface images or models, but they require intensive computations for solving wave equations. Achieving high-fidelity wave simulations is challenging: first, numerical wave solutions may suffer from dispersion and dissipation errors over long propagation distances; second, the efficiency of wave simulators is crucial for many seismic applications. High-order methods decrease numerical errors efficiently and hence are ideal for wave modeling in seismic problems. Various high-order wave solvers have been studied for seismic imaging. Among the most popular are finite difference time domain (FDTD) methods. The strengths of finite difference methods are computational efficiency and ease of implementation, but their drawback is a lack of geometric flexibility; it has been shown that standard finite difference methods suffer from first-order numerical errors at sharp media interfaces. In contrast, discontinuous Galerkin (DG) methods, a class of high-order numerical methods built on unstructured meshes, enjoy geometric flexibility and smaller interface errors. Additionally, DG methods are highly parallelizable and have an explicit semi-discrete form, which makes DG suitable for large-scale wave simulations.

        In this dissertation, discontinuous Galerkin methods on hybrid meshes are developed and applied to two seismic algorithms: reverse time migration (RTM) and full waveform inversion (FWI). This thesis describes in depth the steps taken to develop a forward DG solver for the framework that efficiently exploits the element-specific structure of hexahedral, tetrahedral, prismatic, and pyramidal elements. In particular, we describe how to exploit the tensor-product property of hexahedral elements, and we propose the use of hex-dominant meshes to speed up the computation. The computational efficiency is further realized through a combination of graphics processing unit (GPU) acceleration and multi-rate time stepping. As DG methods are highly parallelizable, we build the DG solver on multiple GPUs with element-specific kernels. Implementation details of memory loading, workload assignment, and latency hiding are discussed in the thesis. In addition, we employ a multi-rate time stepping scheme that allows different elements to take different time steps.

        This thesis applies the DG schemes to RTM and FWI to highlight the strengths of the DG methods. For DG-RTM, we adopt a boundary-value saving strategy to avoid data movement on GPUs and reuse the memory loads of the temporal updating procedure to produce higher-quality images without significant extra cost. For DG-FWI, a derivation of the DG-specific adjoint-state method is presented for the fully discretized DG system. Finally, sharp media interfaces are inverted by specifying perturbations of element faces, edges, and vertices.
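        The tensor-product property of hexahedral elements mentioned above lets reference-space derivatives be applied one dimension at a time. A minimal sketch follows; the 1-D differentiation matrix and field values are random placeholders, not the thesis's operators.

        import numpy as np

        # Minimal sketch (illustrative): applying a 1-D differentiation matrix Dr along each
        # direction of a nodal field u[i,j,k] on one hexahedral element costs O(N**4) work,
        # versus O(N**6) for a dense three-dimensional operator.
        N = 5                                    # nodes per direction (illustrative)
        Dr = np.random.rand(N, N)                # stand-in 1-D differentiation matrix
        u = np.random.rand(N, N, N)              # nodal values on one element

        du_dr = np.einsum('ai,ijk->ajk', Dr, u)  # derivative along the first reference direction
        du_ds = np.einsum('bj,ijk->ibk', Dr, u)  # second direction
        du_dt = np.einsum('ck,ijk->ijc', Dr, u)  # third direction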

      • Essays on Game Theory and Financial-Strategy Test

        Zhu, Minyan ProQuest Dissertations & Theses Rice University 2015 Overseas doctoral dissertation (DDOD)

        Game theory studies strategic decision making among multiple rational players. Since Nash's famous 1950 paper, it has found wide application in many fields: political science, financial markets, corporate finance, and industrial organization. Researchers are interested not only in applications of game theory but also in mechanism design, which considers the structure of game forms. In this dissertation I explore both areas: the first two chapters consider games played by multiple players in industrial organization, and the third chapter considers a mechanism design problem for the assignment problem.

        Continued government support of public good programs (e.g., assistance to less developed countries, to university researchers for work on a multistage project, or to communities for environmental improvement programs) often depends on grant recipients making adequate progress toward their goals. Chapter 1 studies a prisoner's dilemma with positive payoffs that repeats a given known number of times or until there is evidence of cheating, whichever comes first. Our discussion focuses precisely on how much cooperation is possible (i.e., for how many periods cooperation lasts). When the termination rule is based on perfect information about the players' behavior and players are motivated to cooperate for at least one period, early termination of the game never occurs, i.e., cooperation continues until the last possible period. Cooperation may end sooner when the termination rule is based on imperfect information about the players' behavior. For the case of imperfect information, I show how much cooperation can occur as a function of the model parameters, under the assumption that players are able to engage in mutual monitoring.

        Chapter 2 investigates the motivation for mutual recommendations. It may seem irrational for a business to refer customers to other stores without earning any profit, yet such examples are all around us; for instance, a mechanic's shop may refer customers to another shop when it cannot fix a problem. In this chapter, I consider a two-player infinitely repeated game in which, in each period, players choose whether or not to recommend, depending on the history of a public signal. A new mechanism, the k + 1 punishment scheme, is proposed in which the two players stop recommending when k consecutive bad signals occur. Among all possible k + 1 punishment schemes, there exists a unique optimal k* that maximizes the players' payoffs. Thus, mutual recommendations between players can increase their overall profits even if such actions incur a cost.

        Chapter 3 investigates a typical class of assignment problems, which relaxes the assumption of complete bipartite graphs but enforces balance conditions. When the domain is 2-connectivity (each agent has at most two available tasks), I find that there exist mechanisms satisfying ordinal efficiency, equal treatment of equals, and strategy-proofness. This result does not restrict the number of players in the game. Since a strong negative result exists for the standard assignment problem, I propose a new mechanism, the hybrid mechanism, to find a more relaxed domain that simultaneously satisfies all three conditions.

        The last chapter of my dissertation explores portfolio management. It compares the results of the decay model with various DCC-GARCH models in a risk-parity strategy. Data on 16 commodity futures from January 1, 1990 to December 31, 2013 are used to construct portfolio weights. The performance measures are risk attribution, Sharpe ratio, total return, loss functions, and rolling volatilities. I find that the decay model and the DCC-GARCH model have similar performance under the risk-parity strategy, even though they make different assumptions about the covariance matrix.
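        One reading of the k + 1 punishment trigger described in Chapter 2 can be simulated directly: cooperation (mutual recommendation) continues until k consecutive bad public signals occur. The signal probability and horizon below are illustrative assumptions, not the dissertation's calibration.

        import numpy as np

        # Minimal sketch (one reading of the trigger, illustrative parameters): count how many
        # periods recommendations last before k consecutive bad signals end them.
        def periods_of_cooperation(k, p_bad=0.2, horizon=10_000, seed=0):
            rng = np.random.default_rng(seed)
            consecutive_bad = 0
            for t in range(horizon):
                if rng.random() < p_bad:      # bad public signal this period
                    consecutive_bad += 1
                else:
                    consecutive_bad = 0
                if consecutive_bad >= k:      # punishment phase triggered
                    return t + 1
            return horizon

        for k in (1, 2, 3, 5):
            print(k, periods_of_cooperation(k))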

      • Characterizing Algorithmic Efficiencies through Concentration

        Pham, Duc Hung ProQuest Dissertations & Theses Rice University 2021 Overseas doctoral dissertation (DDOD)

        Understanding the inherent bottlenecks to efficient algorithm design lies at the heart of computer science. This question is significant both in the classical computing domain and in the emerging context of quantum computing. In this thesis, my goal is to characterize bottlenecks in designing efficient algorithms through the lens of a parameter called the concentration of functions, starting with the domain of quantum information. My primary focus is probably approximately correct (PAC) learning. I chose this domain since it allows us to approach the important subject of supervised learning in a rigorous and principled manner. For PAC learning, I propose a quantum algorithm to learn the class of concentrated Boolean functions with complexity O(m/ε²), which offers an advantage over the best known classical PAC algorithms with complexity O(n²M), where M denotes the number of concentration terms. I also show a lower bound of Ω(M) for PAC learning this class of functions in distribution-independent settings. All of this work is done in the context of the standard query model for PAC learning, where the complexity measure is the number of queries, dubbed query complexity. I extend this work to the learning model in which functions are learned without any error, often called exact learning, and prove a query complexity lower bound of Ω(ε log M / n2ⁿ) for exactly learning the class of concentrated Boolean functions.

        In the next part of the thesis, I focus on classical algorithms and explore a combinatorial counterpart of concentration called the degree of symmetry. In this arena, graph isomorphism is my problem of choice. Once again, my goal is to characterize the efficiency of algorithms, in particular parallel algorithms, for graph isomorphism based on a concentration-related parameter. In particular, I propose a parallel algorithm that runs in polynomial time using a quasi-polynomial number of processors for the Graph Isomorphism problem. My work builds on Babai's celebrated quasi-polynomial algorithm and is work-preserving. The parallelization exploits the symmetry of the input structure.
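        The notion of a concentrated Boolean function can be made concrete with a small worked example: compute the Walsh-Hadamard (Fourier) coefficients of a function given by its +/-1 truth table and measure how much of the (unit) Fourier mass the top M coefficients carry. This illustrates the definition only, not the thesis's learning algorithms.

        import numpy as np

        # Minimal sketch (illustrative): Fourier concentration of a Boolean function
        # f: {0,1}^n -> {-1,+1} given as a length-2**n truth table.
        def walsh_hadamard(v):
            v = v.astype(float).copy()
            h = 1
            while h < len(v):
                for i in range(0, len(v), 2 * h):
                    a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
                    v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
                h *= 2
            return v

        def concentration(truth_table, M):
            f_hat = walsh_hadamard(np.asarray(truth_table)) / len(truth_table)
            mass = np.sort(f_hat ** 2)[::-1]          # total Fourier mass is 1 by Parseval
            return mass[:M].sum()

        parity = np.array([(-1) ** bin(x).count("1") for x in range(8)])
        print(concentration(parity, M=1))             # parity is perfectly concentrated: 1.0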

      • How to Group: From Time Series to Manifold

        Cosentino-Faugere, Romain ProQuest Dissertations & Theses Rice University 2021 Overseas doctoral dissertation (DDOD)

        This thesis addresses the problem of data representation for pattern recognition by focusing on three fundamental properties: the efficiency, adaptivity, and interpretability of the representation. A century of progress in harmonic analysis has led to the development of theoretically sound and interpretable tools to decompose, analyze, and process signals. Nevertheless, these tools have shown their limitations in terms of expressive power and flexibility. The last decade of research in pattern recognition has been revolutionized by the myriad results that Deep Learning (DL) algorithms have provided, helping us better understand how to build an efficient data representation. Among the intuitions that DL approaches have provided, most of which are yet to be proven, we focus on the idea that an efficient representation of the data should be learned jointly with the task at hand. While DL provides the framework and practical tools that enable the efficiency and adaptivity of the representation, it lacks interpretability and theoretical guarantees. By intersecting harmonic analysis and deep learning, the work undertaken in this thesis explores the possibility of providing Deep Harmonic Learning tools, where interpretability is driven by our deep knowledge of harmonic analysis techniques and flexibility is driven by DL techniques.

        The first objective is to explore the generalization and learnability of the wavelet transform. Our approach decomposes this task by considering the wavelet transform as two building blocks: a mother wavelet and a group. We first tackle the learnability of the mother wavelet by exploiting its efficient representation in a Hermite cubic spline basis. This approach, both efficient and learnable, is used to replace the first layer of Deep Neural Networks (DNNs) and demonstrated its performance on a large-scale pattern recognition task. Then, we consider the learnability of the group by which the mother wavelet is transformed to produce the filter bank. This approach allows learning intricate correlations that are often aligned with the symmetries of the data. Again, replacing the first layer of DNNs with these adaptive filters provides state-of-the-art results on various datasets.

        The second objective of this thesis is to explore the approximation and quantization of manifolds by exploiting the assumption that the data manifold is governed by a symmetry group. Our approach is twofold. First, we provide a quantizer of the image manifold that is based on the learnability of the non-rigid transformations governing the images. In particular, we build a metric that is aware of these intricate transformations and can adapt to the data at hand. The challenge of learning appropriate invariances in an unsupervised fashion is tackled by exploiting the intuitive parameterization offered by the Thin-Plate-Spline interpolation method. The resulting shallow clustering algorithm is fully interpretable and achieves performance comparable to its deep learning counterparts. Second, we provide a new approach to manifold approximation with generalization guarantees. This is achieved by exploiting the piecewise-continuous approximation property of autoencoders, which can be constrained to be equivariant to a group of transformations. Again, we consider the learnability of the group underlying the data to steer the equivariance. The equivariant autoencoder we propose achieves state-of-the-art results on a large number of datasets.
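        The "mother wavelet plus group" decomposition of a filter bank can be sketched with fixed choices: a Morlet-like mother wavelet acted on by a dilation group. In the thesis both pieces are learned (the wavelet via Hermite cubic splines); here everything is fixed and illustrative.

        import numpy as np

        # Minimal sketch (illustrative, not the thesis models): build a filter bank as the orbit
        # of a mother wavelet under a discrete dilation group, then filter a signal with it.
        def morlet(t, center_freq=5.0):
            return np.exp(-t**2 / 2.0) * np.cos(center_freq * t)

        def filter_bank(n_filters=8, length=256, base_scale=1.0, ratio=1.5):
            t = np.linspace(-4.0, 4.0, length)
            scales = base_scale * ratio ** np.arange(n_filters)   # the dilation "group" orbit
            return np.stack([morlet(t / s) / np.sqrt(s) for s in scales])

        signal = np.random.randn(1024)
        responses = np.stack([np.convolve(signal, f, mode="same") for f in filter_bank()])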
