Current


  • Simba: Spatial In-Memory Big data Analytics: Simba is a distributed in-memory spatial analytics engine based on Apache Spark. It extends the Spark SQL engine across the system stack to support rich spatial queries and analytics through both SQL and DataFrame query interfaces. In addition, Simba introduces native indexing support over RDDs to enable efficient spatial operators, and it extends Spark SQL's query optimizer with spatial-aware, cost-based optimizations that make the best use of existing indexes and statistics.
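
    As a rough illustration of why native spatial indexing matters, the sketch below answers a 2D box (range) query with a toy in-memory grid index instead of a full scan. This is a conceptual sketch only, not Simba's API; Simba exposes such queries through Spark SQL and DataFrames.

```python
# Conceptual sketch only: a toy in-memory grid index for 2D range queries,
# illustrating why native spatial indexing (of the kind Simba adds over RDDs)
# beats scanning every point. This is not Simba's API.
from collections import defaultdict

class GridIndex:
    def __init__(self, points, cell_size=10.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)          # (cx, cy) -> points in that cell
        for x, y in points:
            self.cells[self._cell(x, y)].append((x, y))

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def range_query(self, x_lo, y_lo, x_hi, y_hi):
        """Return points inside the box, touching only the overlapping cells."""
        cx_lo, cy_lo = self._cell(x_lo, y_lo)
        cx_hi, cy_hi = self._cell(x_hi, y_hi)
        hits = []
        for cx in range(cx_lo, cx_hi + 1):
            for cy in range(cy_lo, cy_hi + 1):
                for x, y in self.cells.get((cx, cy), []):
                    if x_lo <= x <= x_hi and y_lo <= y <= y_hi:
                        hits.append((x, y))
        return hits

index = GridIndex([(1.0, 2.0), (15.0, 3.5), (42.0, 7.0)])
print(index.range_query(0.0, 0.0, 20.0, 10.0))   # -> [(1.0, 2.0), (15.0, 3.5)]
```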



  • Medium: Collaborative: Seal: Secure Engine for AnaLytics - From Secure Similarity Search to Secure Data Analytics: Many organizations and individuals rely on the cloud to store their data and process their analytical queries, but such data may contain sensitive information. Users not only want to conceal their data in the cloud; they may also want to hide their analytical queries, the results of those queries, and their data access patterns from the cloud service provider (which may be compromised either from within or by a third party).



  • CIF21 DIBBs: STORM: Spatio-Temporal Online Reasoning and Management of Large Data: A fundamental challenge for many research projects is handling large quantities of heterogeneous data. Data collected from different sources and time periods can be inconsistent, or stored in different formats and data management systems. Thus, a critical step in many projects is to develop a customized query and analytical engine to translate inputs, but for each new dataset, or for each new query type or analytic task on an existing dataset, a new query interface or program must be developed, requiring significant investments of time and effort. This project develops an automatic query and analytical engine for large, heterogeneous spatial and temporal data, with an initial focus on weather and meteorology data, particularly from instruments in the western US in a regional network called MesoWest. This capability allows users to automatically deploy a query and analytical engine instance over their large, heterogeneous data with spatial and temporal dimensions. The system supports a simple search-box and map-like query interface that enables numerous powerful analytical queries, and the project develops techniques to make these queries robust, relevant, and highly scalable. It also enables users to execute queries over multiple data sources simultaneously and seamlessly. The goal of the work is to dramatically simplify the management and analysis of large spatio-temporal data at different institutions, groups, and corporations.



  • BIGDATA: Small: DCM: DA: Building a Mergeable and Interactive Distributed Data Layer for Big Data Summarization Systems: Big data today is stored in a distributed fashion across many different machines or data sources. This poses new algorithmic and system challenges to performing efficient analysis on the full data set. To address these difficulties, the PIs are building the MIDDLE (Mergeable and Interactive Distributed Data LayEr) Summarization System and deploying it on large real-world datasets. The MIDDLE system builds and maintains a special class of summaries that can be efficiently constructed and updated while still allowing fine-grained analysis on the heavy tail. Mergeable summaries can represent any data set with a guaranteed tradeoff between size and accuracy, and any two such summaries can be merged to create a new summary with the same size-accuracy tradeoff.
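
    To make "mergeable with a fixed size/accuracy tradeoff" concrete, the sketch below implements one classic mergeable summary, the Misra-Gries heavy-hitters summary, and merges two summaries built at different sites. This particular summary is used here only for illustration; MIDDLE's summaries may differ.

```python
# A minimal sketch of one classic mergeable summary: the Misra-Gries
# heavy-hitters summary with at most k counters. Merging two size-k summaries
# yields another size-k summary with the same error guarantee.

def mg_update(counters, item, k):
    """Feed one item into a Misra-Gries summary holding at most k counters."""
    if item in counters:
        counters[item] += 1
    elif len(counters) < k:
        counters[item] = 1
    else:
        # Decrement every counter; drop the ones that reach zero.
        for key in list(counters):
            counters[key] -= 1
            if counters[key] == 0:
                del counters[key]
    return counters

def mg_merge(a, b, k):
    """Merge two size-k summaries into one size-k summary."""
    merged = dict(a)
    for item, count in b.items():
        merged[item] = merged.get(item, 0) + count
    if len(merged) > k:
        # Subtract the (k+1)-th largest count from everything, keep positives.
        cut = sorted(merged.values(), reverse=True)[k]
        merged = {i: c - cut for i, c in merged.items() if c - cut > 0}
    return merged

k = 2
left, right = {}, {}
for x in "aababc":            # stream seen at site 1
    mg_update(left, x, k)
for x in "bbbcd":             # stream seen at site 2
    mg_update(right, x, k)
print(mg_merge(left, right, k))   # -> {'a': 2, 'b': 3}: the heavy items survive
```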



  • TWC: Medium: TCloud: A Self-Defending, Self-Evolving and Self-Accounting Trustworthy Cloud Platform: The use of cloud computing has revolutionized the way in which cyber infrastructure is used and managed. The on-demand access to seemingly infinite resources provided by this paradigm has enabled technical innovation and indeed innovative business models and practices. This rosy picture is threatened, however, by increasing nefarious interest in cloud platforms. Specifically, the shared-tenant, shared-resource nature of cloud platforms, as well as the natural accrual of valuable information in cloud platforms, provide both the incentive and the possible means of exploitation. To address these concerns we are developing a self-defending, self-evolving, and self-accounting trustworthy cloud platform, the TCloud. Our approach in realizing TCloud holds to the following five tenets. First, defense-in-depth through innate containment, separation, and diversification at the architectural level. Second, least authority through clear separation of functionality and associated privilege within the architecture. Third, explicit orchestration of security functions based on cloud-derived and external intelligence. Fourth, moving-target defense through deception and dynamic evolution of the platform. Fifth, verifiable accountability through lightweight validation and auditable monitoring, record keeping, and analysis. Our approach of fundamentally refactoring the cloud architecture to explicitly enable security-related functionality lays the foundation for truly trustworthy cloud computing. Given the unrelenting push towards the use of cloud technologies, our work has broad applicability across industry, healthcare, government, and academia. All software we develop will be released to the community in open-source form.



  • CSR: Medium: Energy-Efficient Architectures for Emerging Big-Data Workloads: In modern server architectures, the processor socket and the memory system are implemented as separate modules. Data exchange between these modules is expensive: it is slow, it consumes a large amount of energy, and it incurs long wait times on narrow data links. Emerging big-data workloads will require especially large amounts of data movement between the processor and memory. To reduce the cost of data movement for big-data workloads, the project designs new server architectures that leverage 3D stacking technology. The proposed approach, referred to as Near Data Computing (NDC), reduces the distance between a subset of computational units and a subset of memory, and can yield high efficiency for workloads that exhibit locality. The project will also develop new big-data algorithms and runtime systems that can exploit the properties of the new architectures.

    The project will lead to technologies that can boost performance and reduce the energy demands of big-data workloads. Several reports have cited the importance of these workloads to national, industrial, and scientific computing infrastructures. The project outcomes will be integrated into University of Utah curricula and will play a significant role in a new degree program on datacenter design and operation. The PIs will broaden their impact by publicly distributing parts of their software infrastructure and by engaging in outreach programs that involve minorities and K-12 students.



  • Time-Sensitive and Keyword-Aware Recommendation in Large Data: TBA



  • Scholarships for Service at Florida State University: Focused on Information Assurance education, the Scholarship For Service program gives students scholarship funds in exchange for service in the federal government for a period equivalent to the length of their scholarship, typically two years. As a result of the SFS program, federal agencies are able to select from a highly qualified pool of student applicants for internships and permanent positions, and students get a guaranteed job. The SFS program is offered by the National Science Foundation (NSF) and co-sponsored by the Department of Homeland Security (DHS).



  • COFRS Award: Efficient Aggregate Similarity Search: For many geospatial and multimedia database applications, a fundamental challenge is to answer similarity search over large amounts of data, also known as the nearest neighbor (NN) query. Despite extensive studies, little is known about aggregate similarity search, which has witnessed an increasing number of applications. Given a data set P, an aggregate similarity query is specified by a group of query objects Q, an aggregator σ, and a similarity function f involving at least the spatial distance between objects. The goal is to find an object p* ∈ P such that the aggregate similarity between p* and the objects in Q is minimized. For example, when σ is the sum over spatial distances, the goal is to find the p* ∈ P that minimizes the sum of distances between p* and every object in Q. Aggregate similarity search presents interesting and challenging research problems, in terms of both query semantics and query processing techniques. We will also explore parallel and relational methods (those that can be directly implemented with standard SQL statements) in this project. Due to the fundamental importance of similarity search and the emerging applications of aggregate similarity search, this project will significantly improve scientific work in geospatial intelligence analytics, including spatial intelligence, complex spatial data analysis, GIS, and location-based services. End-users may design customized aggregate similarity searches to identify normal patterns and discover anomalies in complex geospatial and multimedia data.
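
    The sketch below fixes the query semantics described above: given P and Q, it returns the p* in P minimizing the aggregate (sum or max) of Euclidean distances to Q. The brute-force scan is only to pin down the definition; the project targets far more efficient indexed, parallel, and SQL-based evaluation strategies.

```python
# Aggregate similarity (aggregate nearest neighbor) query, stated as code:
# find p* in P minimizing aggregator(f(p*, q) for q in Q), with Euclidean
# distance as f. Brute force is used only to make the semantics precise.
import math

def aggregate_nn(P, Q, aggregator=sum):
    """Return the p* in P minimizing aggregator(dist(p*, q) for q in Q)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(P, key=lambda p: aggregator(dist(p, q) for q in Q))

P = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
Q = [(4.0, 4.0), (6.0, 6.0), (5.0, 7.0)]
print(aggregate_nn(P, Q, aggregator=sum))   # minimizes total distance to Q
print(aggregate_nn(P, Q, aggregator=max))   # minimizes the worst-case distance
```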



  • Non-Conventional Search and Identification of Delinquent Parent: TBA



  • CAREER: Novel Query Processing Techniques for Probabilistic Data: Data are increasingly generated, stored, and processed distributively. Meanwhile, when large amounts of data are generated, ambiguity, uncertainty, and errors are inherently introduced, especially in a distributed setup, and such data are best represented in a distributed probabilistic database. In distributed data management, summary queries are useful tools for obtaining the most important answers from massive quantities of data effectively and efficiently, e.g., top-k queries, heavy hitters (a.k.a. frequent items), histograms and wavelets, and threshold monitoring queries. This project investigates novel query processing techniques for various important summary queries over distributed probabilistic data. Broadly classified, the project examines both snapshot summary queries in static (i.e., no updates) distributed probabilistic databases and continuous summary queries in dynamic (i.e., with updates) distributed probabilistic databases. A number of techniques are explored to design novel, communication- and computation-efficient algorithms for processing these queries. A distributed probabilistic data management system (DPDMS) prototype is implemented based on the query processing techniques developed in this project, and is released to and used by scientists and engineers from other science disciplines as well as industry. Graduate and undergraduate students, including those from minority groups, are actively involved in this project, and findings from the project have been integrated into different courses, demos, and educational projects.



  • Efficient Ranking and Aggregate Query Processing for Probabilistic Data: When dealing with massive quantities of data, ranking and aggregate queries are powerful techniques for focusing attention on the most important answers. Many applications that produce such massive quantities of data also inherently introduce uncertainty, for example, probabilistic matches in data integration, imprecise measurements from sensors, fuzzy duplicates in data cleaning, and inconsistencies in scientific data. Hence, these queries are even more important for probabilistic data, where a relation can encode exponentially many possible worlds. Uncertainty also opens the gate to many possible definitions for ranking and aggregate queries. With the wide presence of probabilistic data, processing ranking and aggregate queries efficiently and with the right semantics is key to the successful deployment of probabilistic databases.
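
    The toy example below shows why "the right semantics" is non-trivial: with independently existing tuples, a small relation already encodes exponentially many possible worlds, and a tuple's score and its existence probability can pull in different directions. The top-1 probability computed here is one semantics from the literature, not necessarily the one adopted in this project.

```python
# A toy illustration of ranking over probabilistic data: each tuple exists
# independently with some probability, so n tuples encode up to 2^n possible
# worlds. We compute each tuple's probability of being the top-1 answer, one
# of several ranking semantics proposed in the literature.
from itertools import product

tuples = [("t1", 100, 0.4), ("t2", 90, 0.9), ("t3", 80, 0.8)]  # (id, score, prob)

def top1_probability(tid):
    """Probability, over all possible worlds, that `tid` is the highest-scoring
    tuple present in the world."""
    total = 0.0
    # Enumerate every possible world as a presence/absence vector.
    for world in product([True, False], repeat=len(tuples)):
        prob = 1.0
        present = []
        for (t, score, p), exists in zip(tuples, world):
            prob *= p if exists else (1.0 - p)
            if exists:
                present.append((t, score))
        if present and max(present, key=lambda x: x[1])[0] == tid:
            total += prob
    return total

for tid, score, p in tuples:
    print(tid, score, p, round(top1_probability(tid), 3))
# t2 is the most likely top answer (0.54) even though t1 has the higher score,
# because t1 exists with probability only 0.4.
```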



  • Towards Trustworthy Database Systems: Answers to database queries often form the basis for critical decision-making. To improve efficiency and reliability, answers to these queries can be provided by distributed servers close to the querying clients. However, because of the servers' ubiquity, the logistics associated with fully securing them may be prohibitive; moreover, when the servers are run by third parties, the clients may not trust them as much as they trust the original data owners. Thus, the authenticity of the answers provided by servers in response to clients' queries must be verifiable by the clients. More generally, database responses are more useful if they contain the evidence of their own correctness. For example, this enables a consumer to provide her own credit report to a creditor without having the creditor request it from the reporting agency to establish the validity of the report. This project is developing methods for authenticating the validity and authenticity of a variety of database queries, including general relational, data cube, and spatio-temporal queries. Furthermore, techniques that use powerful cryptographic primitives are being developed for providing authentication and confidentiality. This research will enable utilization of this infrastructure in applications where users must rely on the authenticity of the answer, such as in financial systems, network monitoring, traffic control, or applications yet to be imagined. The results of this project will be disseminated through publications in journals and conferences. Furthermore, source code of these methods, in the form of libraries, will be made available over the web. This is a collaborative project with the Database Lab at Boston University, with Prof. George Kollios and Prof. Leonid Reyzin.
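
    The sketch below illustrates the general idea of answers that "contain the evidence of their own correctness" using a Merkle hash tree, a standard building block in the authenticated-query literature: the data owner signs only the root digest, and the server returns each answer with a short proof that the client verifies against that digest. This is a conceptual illustration, not necessarily the construction developed in this project.

```python
# Conceptual sketch of a Merkle hash tree for authenticated answers: the owner
# publishes (signs) only the root digest; the server returns an answer plus the
# sibling hashes on its root path; the client recomputes the root and compares.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the tree, from hashed leaves up to the root."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]            # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect the sibling hashes on the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], index % 2))  # (sibling hash, node is right child)
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from the leaf and its proof; compare to the digest."""
    digest = h(leaf)
    for sibling, node_was_right in proof:
        digest = h(sibling + digest) if node_was_right else h(digest + sibling)
    return digest == root

records = [b"alice:720", b"bob:650", b"carol:800", b"dave:590"]
levels = build_tree(records)
root = levels[-1][0]                               # the owner signs this digest
proof = prove(levels, 2)                           # server proves records[2]
print(verify(b"carol:800", proof, root))           # True
print(verify(b"carol:999", proof, root))           # False: tampered answer
```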



  • Past


  • Query Verification in Database Systems: Completed.



  • Building Trustworthy Database Systems: Completed.