
Research projects


Phasor Measurement Unit (PMU) Data Analytics based Smart Grid Diagnostics

Mentors: Hanif Livani and Lei Yang

Project Description: With the proliferation of PMUs in smart grids, time-synchronized, high-resolution measurements can be obtained and used for numerous monitoring applications such as state estimation and event diagnostics. Disruptive events frequently occur in smart grids and interrupt the normal operation of the system, so data-driven event diagnostics are of utmost importance for extracting useful information such as the cause or location of an event. Moreover, a repository of event data is useful for other post-event analyses, such as preventive maintenance. Accurate analysis of disruptive events saves time, improves maintenance crew utilization, and helps prevent further outages. In this research project, a PMU data-driven framework will be developed to distinguish disruptive events, i.e., malfunctioning capacitor bank switching and malfunctioning regulator on-load tap changer (OLTC) switching, from normal operating events, namely abrupt load changes and network reconfiguration in smart grids. The event diagnostics will be formulated using a neural network-based algorithm, i.e., autoencoders along with softmax classifiers. The performance of the proposed framework will be verified using our state-of-the-art Cyber-Physical Hardware-in-the-Loop (CP-HIL) testbed. This project will broaden students' perspective on smart grids through advanced data analytics and hands-on education.
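
The autoencoder-plus-softmax pipeline can be sketched as follows in Python with Keras. The window length, feature dimension, and the four event classes are illustrative assumptions, not the project's actual model or data.

    # Minimal sketch (assumptions: PMU event windows flattened to 384 features,
    # 4 event classes: capacitor switching, OLTC switching, abrupt load change,
    # reconfiguration). Illustration only, not the project's actual model.
    import numpy as np
    from tensorflow.keras import layers, Model

    n_features, n_classes = 384, 4

    # Autoencoder: learn a compact representation of PMU event windows.
    inp = layers.Input(shape=(n_features,))
    code = layers.Dense(32, activation="relu")(inp)            # encoder / bottleneck
    recon = layers.Dense(n_features, activation="linear")(code)
    autoencoder = Model(inp, recon)
    autoencoder.compile(optimizer="adam", loss="mse")

    # Softmax classifier on top of the learned features.
    logits = layers.Dense(n_classes, activation="softmax")(code)
    classifier = Model(inp, logits)
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])

    # Illustrative training on random placeholder data.
    X = np.random.randn(1000, n_features).astype("float32")
    y = np.random.randint(0, n_classes, size=1000)
    autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)  # unsupervised pre-training
    classifier.fit(X, y, epochs=5, batch_size=64, verbose=0)   # supervised fine-tuning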

Student Role: The undergraduate students will get an introduction to Matlab so that they can learn basic data analytics, experiment design, and the use of basic machine learning libraries. With guidance from the mentors and PhD students, they will learn how to extract PMU data streams from actual devices in the CP-HIL testbed, write programs with a graphical user interface (GUI) to access data from the SQL server, and execute data-driven event diagnostics tools.
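
A hedged sketch of pulling archived PMU measurements from a SQL server in Python is shown below; the connection string, table, and column names are hypothetical placeholders, since the testbed's actual schema is not described here.

    # Hypothetical example: query archived PMU phasors from a SQL server.
    # Connection string, table, and column names are placeholders only.
    import pyodbc
    import pandas as pd

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=testbed-db.example.edu;DATABASE=pmu_archive;"
        "UID=student;PWD=********"
    )

    query = """
        SELECT timestamp, bus_id, voltage_magnitude, voltage_angle, frequency
        FROM pmu_measurements
        WHERE timestamp BETWEEN ? AND ?
        ORDER BY timestamp
    """
    df = pd.read_sql(query, conn, params=["2024-01-01 00:00:00",
                                          "2024-01-01 01:00:00"])
    conn.close()

    print(df.head())  # inspect the retrieved PMU data stream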

Big Data Analytics based Wildfire Smoke Transport and Air Quality Prediction

Mentors: Feng Yan, Lei Yang, and Heather Holmes

Project Description: Smoke can travel very fast, causing sudden changes in air quality and significant health and economic problems. State-of-the-art smoke forecasting models can only update infrequently (e.g., every 6 hours) and predict with very limited spatial resolution (e.g., 12 km × 12 km) because of the low spatiotemporal resolution of the available data. To enable real-time prediction of wildfire smoke transport and air quality, data with finer temporal and spatial resolution are needed. Ground-level camera systems (e.g., the AlertTahoe Fire Camera Network) generate large amounts of image data at various locations with much finer spatial and temporal resolution; for example, each camera can generate 30 images per second for a small region (e.g., 1 km × 1 km). Using these images, we can first detect smoke in each region, then estimate the air quality based on the strong correlation between air pollution concentrations and smoke plume density, and finally predict air quality from smoke transport.
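
As one possible realization of the per-region smoke detection step, the sketch below uses a small convolutional network in Keras as a binary smoke / no-smoke classifier on camera frames. The input resolution and architecture are illustrative assumptions, not the project's model.

    # Illustrative binary smoke / no-smoke classifier for camera frames.
    # Input size and architecture are assumptions for demonstration only.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),           # downscaled camera frame
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),       # P(smoke present in this region)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(frames, labels, ...) would be run on labeled camera imagery.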

Student Role: With guidance from the mentors and PhD students, the undergraduate students will learn how to detect smoke from image data using deep neural networks (DNNs). They will also learn how to predict air quality from smoke transport using a Gaussian Markov random field (GMRF), exploiting the strong correlation between air quality and smoke transport.
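
The GMRF step can be illustrated with a toy example: given a precision matrix over monitoring sites, air quality at unobserved sites is predicted from the conditional Gaussian mean given the observed sites. The spatial graph and numbers below are made up for illustration.

    # Toy GMRF prediction: condition unobserved sites on observed neighbors.
    # Precision matrix Q encodes the spatial graph; values are illustrative.
    import numpy as np

    # 4 sites on a chain: 0 - 1 - 2 - 3 (neighbors coupled in the precision matrix)
    Q = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])
    mu = np.zeros(4)                      # prior mean (e.g., background AQI anomaly)

    obs_idx, hid_idx = [0, 3], [1, 2]     # sites 0 and 3 observed, 1 and 2 predicted
    x_obs = np.array([1.5, 0.2])          # observed air-quality anomalies

    # Conditional mean of a GMRF: mu_h - Q_hh^{-1} Q_ho (x_obs - mu_o)
    Q_hh = Q[np.ix_(hid_idx, hid_idx)]
    Q_ho = Q[np.ix_(hid_idx, obs_idx)]
    x_pred = mu[hid_idx] - np.linalg.solve(Q_hh, Q_ho @ (x_obs - mu[obs_idx]))
    print(x_pred)   # predicted anomalies at the unobserved sites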

Big Data Analytics based Robotic Perception for Autonomous Driving

Mentors: Kostas Alexis and Lei Yang

Project Description: Autonomous driving requires an accurate and comprehensive understanding of the vehicle's surroundings. This entails a multitude of challenges, including a) detection and classification of objects of interest and b) estimation of the relative pose of such objects. State-of-the-art methods face certain limitations. Object detection tasks such as traffic sign recognition are well handled, but the respective methods struggle in visually degraded environments or under significant occlusion. Object localization works well when the tracked object is consistently perceived but can fail otherwise due to the lack of reliable prediction methods. To achieve reliable autonomous driving, "any-time" and "any-place" robust and safe navigation autonomy must be facilitated. In this project, we have identified three important challenges of progressive complexity and will offer corresponding research experiences for students.

Student Role: The undergraduate students will conduct experiments with a pre-trained neural network and perform detection of traffic signs in both well-lit and low-light conditions, with guidance from the mentors and PhD students. After they become familiar with machine learning and TensorFlow, they will dive deeper into multi-view geometry and aim to optimize the recognition behavior of a pre-trained network by exploiting a sliding window over the vehicle's trajectory.
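
A minimal sketch of the first task is shown below: running a pre-trained TensorFlow classifier on a camera frame and on an artificially darkened copy to mimic a low-light condition. The saved-model path, input size, and frames are placeholders, not the project's actual model or data.

    # Sketch: evaluate a pre-trained traffic-sign classifier on a well-lit frame
    # and on an artificially darkened copy. Model path and input size are placeholders.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("traffic_sign_classifier.h5")  # placeholder path

    def classify(frame):
        """frame: HxWx3 uint8 image; returns (class_index, confidence)."""
        x = tf.image.resize(frame, (64, 64)) / 255.0      # assumed 64x64 model input
        probs = model.predict(x[tf.newaxis, ...], verbose=0)[0]
        return int(np.argmax(probs)), float(np.max(probs))

    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # placeholder camera frame
    dark = (frame * 0.2).astype(np.uint8)                             # simulate low light

    print("well-lit :", classify(frame))
    print("low-light:", classify(dark))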

Adaptive and Scalable Big Data Management

Mentors: Lei Yang and Dongfang Zhao

Project Description: In many big data applications, massive amounts of heterogeneous data are collected by various sensing devices in order to enhance cognition of the system dynamics and optimize decision making. However, these measurements are subject to communication delay and data packet loss, which can lead to significant errors in system state estimation and prediction. A key observation is that the measurements are spatio-temporally correlated and can therefore be represented in a low-rank subspace. By constructing and tracking this low-rank subspace, it is possible to reconstruct the delayed or missing data. However, such a low-rank subspace is difficult to characterize, as the measurements are heterogeneous with complex spatio-temporal correlations, and these correlations may change over time due to changes in network topology.
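
One simple way to see the low-rank recovery idea is iterative SVD-based matrix completion: repeatedly replace missing entries with the values of the best rank-r approximation. The sketch below is a generic illustration on synthetic data, not the project's adaptive subspace-tracking method.

    # Toy low-rank recovery: iteratively impute missing sensor readings with a
    # rank-r SVD approximation. Generic illustration, not the project's method.
    import numpy as np

    rng = np.random.default_rng(0)
    true = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 "measurements"
    mask = rng.random(true.shape) > 0.3        # True where a measurement arrived
    X = np.where(mask, true, 0.0)              # missing entries start at zero

    r = 3
    for _ in range(50):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :r] * s[:r]) @ Vt[:r]         # best rank-r approximation
        X = np.where(mask, true, low_rank)             # keep observed, impute missing

    err = np.linalg.norm((X - true)[~mask]) / np.linalg.norm(true[~mask])
    print(f"relative error on missing entries: {err:.3f}")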

Student Role: The undergraduate students will get an introduction to basic data recovery techniques and apply them to recover missing data. With guidance from the mentors and PhD students, they will study state-of-the-art tensor-based data recovery techniques and develop adaptive and scalable data recovery methods for various big data applications.

Big Data System Performance and Efficiency Optimization

Mentors: Feng Yan and Dongfang Zhao

Project Description: Big data analytics tasks in many applications (e.g., recognition, prediction, and control for smart cities) are carried out on large-scale distributed systems (e.g., Hadoop, Spark, Storm, TensorFlow, and Caffe). The performance of these big data systems depends on configuration optimization for different applications, workloads, and systems. However, today's computing frameworks provide tens to hundreds of configuration knobs for users to tune their systems, which makes tuning a challenging task for users with no expertise in either the application domain or the system. Even for experts in both, optimally configuring the system is time-consuming. Therefore, there is an urgent need to develop formal methodologies for automatic configuration tuning to optimize big data system performance and efficiency.
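
As a concrete starting point for automatic configuration tuning, the sketch below runs a simple random search over a few knobs and keeps the configuration with the lowest measured runtime. The knob names and the benchmark function are hypothetical placeholders for a real Spark or Hadoop job; real frameworks expose far more knobs and usually call for smarter search (e.g., Bayesian optimization).

    # Toy random search over configuration knobs. Knob names and the benchmark
    # function are hypothetical placeholders for a real distributed job.
    import random

    KNOBS = {
        "executor_memory_gb": [2, 4, 8, 16],
        "executor_cores":     [1, 2, 4],
        "shuffle_partitions": [50, 100, 200, 400],
    }

    def run_benchmark(config):
        """Placeholder: submit the job with `config` and return its runtime (s).
        A fake runtime is returned here so the sketch is self-contained."""
        return (100 / config["executor_memory_gb"]
                + 50 / config["executor_cores"]
                + abs(config["shuffle_partitions"] - 200) * 0.1
                + random.uniform(0, 5))

    best_cfg, best_time = None, float("inf")
    for _ in range(20):                                   # 20 random trials
        cfg = {k: random.choice(v) for k, v in KNOBS.items()}
        t = run_benchmark(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t

    print("best configuration:", best_cfg, "runtime:", round(best_time, 1), "s")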

Student Role: The undergraduate students will get an introduction to big data analytics systems, such as Hadoop, Spark, and Storm. With guidance from the mentors and PhD students, they will become familiar with the configuration knobs and learn basic skills in configuring these systems. They will then learn how to collect performance measurements from the systems and analyze them. Finally, they will learn to use analytical models, simulation, and machine learning techniques to automatically tune the configuration knobs for optimized system performance and efficiency.
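
To illustrate the last step, the sketch below fits a regression model to collected (configuration, runtime) measurements and uses it to predict runtimes of untried configurations. The knob features, synthetic measurements, and choice of scikit-learn are illustrative assumptions.

    # Sketch: learn a performance model from measured (configuration, runtime)
    # pairs and predict runtimes of untried configurations. Data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Measured trials: [executor_memory_gb, executor_cores, shuffle_partitions] -> runtime (s)
    X = np.array([[2, 1, 100], [4, 2, 200], [8, 2, 100], [16, 4, 400], [8, 4, 200]])
    y = np.array([140.0, 85.0, 70.0, 55.0, 48.0])

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    candidates = np.array([[16, 2, 200], [8, 1, 400], [16, 4, 200]])  # untried configs
    for cfg, t in zip(candidates, model.predict(candidates)):
        print(cfg, "-> predicted runtime", round(t, 1), "s")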