Our research focuses on the development of artificial intelligence (AI) systems — which include machine learning, deep learning, computer vision, and other techniques — to solve cutting-edge problems in a variety of application areas (shown below). This sometimes involves developing novel AI approaches, as well as adapting existing approaches to solve new problems. Some of our active research areas are described below. For a longer list of publications, see my Google Scholar profile (link).
Estimating Emissions on a Global Scale
This project is supported by the ClimateTrace foundation, a non-profit organization committed to tracking global greenhouse gas emissions and backed by Al Gore, the former Vice President of the USA (photo below from this past year's annual research meeting).
Our goal is to estimate building emissions (i.e., greenhouse gas pollution) at a 1-kilometer-by-1-kilometer spatial resolution across the entire globe. This is roughly 100 times finer (in grid-cell area) than the current highest-resolution emissions estimates in this sector, which use grid cells of about 10 kilometers by 10 kilometers. To accomplish this goal, we are developing novel models that use various sources of information (e.g., economic indicators, building footprints derived from satellite imagery) to predict the probable emissions within each region, as illustrated in the figure below. Our work is done in collaboration with the Energy Data Analytics Lab at Duke University (link). You can read more about our recent work here (link).
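At its core, this task can be viewed as supervised regression over per-grid-cell features. The sketch below is purely illustrative and is not the project's actual pipeline: the feature names, the synthetic data, and the choice of a gradient-boosted regressor are all assumptions made for the example.

```python
# Illustrative sketch (not the actual emissions pipeline): predict per-cell
# building emissions from tabular features such as building footprint area and
# economic indicators. All feature names and data here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 5000  # hypothetical number of 1 km x 1 km grid cells with known emissions

# Synthetic per-cell features: total building footprint area, nighttime-lights
# intensity, and a regional economic indicator (all scaled to [0, 1]).
X = rng.random((n_cells, 3))
# Synthetic emissions target (e.g., tonnes CO2e per cell) loosely tied to the features.
y = 100 * X[:, 0] + 30 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 5, n_cells)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```

In practice the real features are derived from satellite imagery and economic datasets, and the model family is a design choice; the example only shows the overall shape of the prediction problem.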
Designing Advanced Materials
Metamaterials (MMs) are a widely studied class of materials whose properties depend primarily upon their geometric structure, rather than their chemical composition. MMs are exciting because they have been shown capable of attaining exotic properties that cannot be realized by conventional materials, provided the proper geometric design can be found. In principle, increasingly powerful properties are attainable with MMs if sufficiently complex geometric configurations can be identified. However, as the complexity of the MM grows, it becomes more difficult to predict its properties from its geometric design: the so-called "forward modeling" problem. In recent research we have investigated the use of deep neural networks to predict the properties of MMs based upon their geometry, as shown below in (1). We have also developed specialized deep neural networks to solve the so-called "inverse modeling" problem, wherein we wish to predict the geometric design that would yield some desired material properties, as shown in (2).
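To make the forward-modeling setup concrete, the sketch below trains a small fully-connected network to map a vector of geometry parameters to a predicted spectral response. This is an assumed toy architecture with placeholder dimensions and synthetic data, not the published models from the papers listed below.

```python
# Minimal forward-model sketch (assumed architecture, not the published one):
# map metamaterial geometry parameters to a predicted spectral response.
import torch
import torch.nn as nn

GEOM_DIM = 8        # assumed number of geometry parameters (e.g., thicknesses, radii)
SPECTRUM_DIM = 300  # assumed number of sampled points in the target spectrum

forward_model = nn.Sequential(
    nn.Linear(GEOM_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, SPECTRUM_DIM),
)

# Synthetic stand-in data: geometry vectors and their simulated spectra.
geometry = torch.rand(1024, GEOM_DIM)
spectra = torch.rand(1024, SPECTRUM_DIM)

optimizer = torch.optim.Adam(forward_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):  # short training loop, for illustration only
    pred = forward_model(geometry)
    loss = loss_fn(pred, spectra)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The inverse problem is harder because many different geometries can produce nearly the same spectrum, which is why it calls for specialized network designs rather than simply reversing the inputs and outputs of a model like this one.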
Recent Publications:
Lu, D., Deng, Y., Malof, J.M. and Padilla, W.J., 2024. Can Large Language Models Learn the Physics of Metamaterials? An Empirical Study with ChatGPT. arXiv preprint arXiv:2404.15458.
Spell, G.P., Ren, S., Collins, L.M. and Malof, J.M., 2023, June. Mixture manifold networks: a computationally efficient baseline for inverse modeling. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 8, pp. 9874-9881).
Ren, S., Mahendra, A., Khatib, O., Deng, Y., Padilla, W.J. and Malof, J.M., 2022. Inverse deep learning methods and benchmarks for artificial electromagnetic material design. Nanoscale, 14(10), pp.3958-3969.
Extraction of Actionable Information From Satellite Imagery
Remotely-sensed data (e.g., satellite imagery) potentially offers a rich source of information about the natural environment and human activities. However, this information is often inaccessible because remotely-sensed data is extremely large and information-diffuse, making it costly or impossible for human analysts to manually inspect such data at scale. We can use AI models, such as deep neural networks, to automatically inspect massive volumes of remotely-sensed data for useful information. The figure below illustrates past work in which we developed competition-winning methods for automatically identifying building "footprints" in satellite imagery.
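Building-footprint extraction is typically framed as semantic segmentation: every pixel is labeled as building or background. The sketch below shows that framing using an off-the-shelf fully-convolutional network; the tile size, batch, and labels are placeholder assumptions, and this is not our competition-winning model.

```python
# Illustrative building-footprint segmentation sketch (assumed setup, not our
# competition model): a standard fully-convolutional network predicts a per-pixel
# "building" vs. "background" mask from a satellite image tile.
import torch
from torchvision.models.segmentation import fcn_resnet50

# Two classes: background (0) and building footprint (1).
model = fcn_resnet50(weights=None, num_classes=2)

# Placeholder batch of RGB tiles, e.g. 512 x 512 pixel crops of satellite imagery.
tiles = torch.rand(4, 3, 512, 512)
masks = torch.randint(0, 2, (4, 512, 512))  # synthetic ground-truth footprint masks

logits = model(tiles)["out"]                 # shape: (4, 2, 512, 512)
loss = torch.nn.functional.cross_entropy(logits, masks)
loss.backward()
print("Per-pixel cross-entropy loss:", loss.item())
```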
Recent Publications:
Ren, S., Luzi, F., Lahrichi, S., Kassaw, K., Collins, L.M., Bradbury, K. and Malof, J.M., 2024. Segment anything, from space?. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 8355-8365).
Luzi, F., Gupta, A., Collins, L., Bradbury, K. and Malof, J., 2023. Transformers for recognition in overhead imagery: A reality check. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 3778-3787).
Kong, F., Huang, B., Bradbury, K. and Malof, J., 2020. The Synthinel-1 dataset: A collection of high resolution synthetic overhead imagery for building segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1814-1823).