Projects
I am currently involved in a few projects, some personal and others in collaboration with friends. Below is a representative list of current and past projects:
1. Full Waveform Inversion as a Gradient Flow Problem
Full Waveform Inversion (FWI) is an advanced inverse method used widely in geophysics to determine subsurface properties. The procedure iteratively solves a forward problem coupled with an inverse problem, which makes it computationally expensive with traditional numerical methods; these are slow but well understood.
Machine learning (ML) provides a promising alternative: once trained, a model can deliver near-instant predictions on new seismic data without repeatedly solving the forward problem. Unfortunately, most current ML models for FWI are purely data-driven (black boxes), with limited guarantees of accuracy or consistency, and the situation is even worse on out-of-distribution data. This lack of reliability is a serious barrier to deployment in real-world settings, where errors can have multi-million-dollar consequences. Even the top model (a Vision Transformer) in a recent Kaggle FWI competition was still purely data-driven.
To address this, it is imperative to build ML models that respect the governing mathematics and physics and are supported by extensive error analysis. To that end, I decided to recast FWI as a gradient-flow problem and design an ML model that respects energy dissipation.
The idea is to use an encoder-decoder approach (not in the usual latent-space sense) to solve the forward and inverse problems jointly. Further details are provided in the research statement I drafted (view here), and initial results of the project can be found here.
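To make the gradient-flow framing concrete, here is a minimal sketch, not the project's actual model: viewing the velocity model m as evolving along dm/dt = -∇J(m), where J is the data misfit, so each small step dissipates J by construction. The names `gradient_flow_step` and `forward_op` are placeholders I introduce for illustration; `forward_op` stands in for a differentiable wave-equation solver or surrogate.

```python
import torch

def gradient_flow_step(m, d_obs, forward_op, dt=1e-2):
    """One explicit-Euler step of the flow dm/dt = -grad J(m).

    m          : current velocity model (tensor)
    d_obs      : observed seismic data
    forward_op : placeholder for a differentiable wave-equation solver/surrogate
    """
    m = m.detach().requires_grad_(True)
    d_pred = forward_op(m)                    # forward problem
    J = 0.5 * ((d_pred - d_obs) ** 2).sum()   # data-misfit "energy" J(m)
    (grad_m,) = torch.autograd.grad(J, m)
    # For small dt, J decreases along the flow: dJ/dt = -|grad J|^2 <= 0.
    return (m - dt * grad_m).detach(), J.item()
```

A learned update could then replace the explicit Euler step, with a penalty or projection enforcing J(m_next) ≤ J(m) so the dissipation property survives training.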
Due to limited computational resources, this project is currently paused. I sincerely thank KFUPM JRCAI for granting me temporary GPU access, which enabled me to conceptualize and initiate this work.
2. Hands-on Scientific Machine Learning eBook
SciML is a fast-evolving field that sits at the intersection of scientific computing and machine learning. For newcomers, it can be quite overwhelming: the learning curve is steep, the literature is vast (worst of all, much of it is paywalled and some of it cryptic), and the scientific foundations are not always easy to piece together.
Initially, I began documenting my own learning journey and implementations to make sense of the field. Around that time, I was fortunate to connect with Xiaoyu Xie, who was independently working on an interactive eBook. The idea of an interactive eBook was even cooler, so we decided to collaborate. While my intent was to give back to the community, the past few months have been incredibly rewarding for me personally. I have ventured into Neural ODEs, Neural Fields, Autoencoders and VAEs, Graph Neural Networks, and even Transformers. I have also come across truly fascinating research along the way.
Though we are still in the early stages, this project is something I am excited about, and I cannot wait to see how it evolves after its launch.
3. ML for Phase-Field Modeling
This project is a collaboration with Wahab Abdul, a PhD student at KFUPM working on moving interface problems. We aim to build a surrogate model that augments numerical methods for phase-field modeling. Since numerical methods with very small time steps are slow and computationally expensive, the idea is to generate data by solving the problem with a well-known numerical method (e.g., spectral methods) at large time steps, and then apply an ML approach for continuous-time dynamics. Given our dataset, the Fourier Neural Operator (FNO) gave the best approximation (probably because the data was generated with a Fourier spectral method). Unfortunately, the vanilla FNO is autoregressive, so we are exploring how to embed a Neural ODE within an FNO by encoding into a suitable latent space to reduce computational cost.
It is amusing to see that truncating the higher-frequency modes in an FNO is in fact a latent representation. Thus, the main challenge is to construct a Neural ODE that evolves directly in the truncated Fourier (latent) space, so that it becomes an evolution equation in spectral space. A hedged sketch of that idea follows.
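The sketch below is a minimal illustration of the concept, not our actual model: keep only the first k rFFT modes of a 1D field (the truncation itself acts as the encoder), evolve their real and imaginary parts with a small network as a Neural ODE in spectral space (explicit Euler here for simplicity), and decode with an inverse FFT. `SpectralODE` and its parameters are names I made up for this example.

```python
import torch
import torch.nn as nn

class SpectralODE(nn.Module):
    """Neural ODE acting directly in the truncated Fourier (latent) space."""

    def __init__(self, k, width=64):
        super().__init__()
        self.k = k
        # Real and imaginary parts of k modes -> 2k latent features.
        self.net = nn.Sequential(
            nn.Linear(2 * k, width), nn.GELU(), nn.Linear(width, 2 * k)
        )

    def rhs(self, z):
        # dz/dt = f_theta(z): the learned evolution equation in spectral space.
        return self.net(z)

    def forward(self, u, dt, steps):
        n = u.shape[-1]
        u_hat = torch.fft.rfft(u)[..., : self.k]         # encode: truncate modes
        z = torch.cat([u_hat.real, u_hat.imag], dim=-1)
        for _ in range(steps):                           # explicit Euler in time
            z = z + dt * self.rhs(z)
        re, im = z.split(self.k, dim=-1)
        full = torch.zeros(*u.shape[:-1], n // 2 + 1,
                           dtype=torch.complex(re, im).dtype, device=u.device)
        full[..., : self.k] = torch.complex(re, im)
        return torch.fft.irfft(full, n=n)                # decode: inverse FFT
```

Training would fit f_theta so that integrating dz/dt matches the spectral trajectory of the large-time-step data; dt and the number of steps are then free to vary at inference.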
4. Physics-Informed Neural Networks for Quantum Eigenvalue Problems
For my MS thesis, I worked on using PINNs to obtain a family of solutions; specifically, I focused on obtaining eigenpairs of the time-independent Schrödinger equation across different potentials. Instead of training separate networks for eigenvalues and eigenfunctions (Figs. 3–4), a single network represents the eigenfunction, and the corresponding eigenvalue is obtained by evaluating the Rayleigh quotient on the network output. This accelerated convergence relative to the two-network baseline. To obtain higher energy levels, I considered two strategies: (i) a deflation approach that learns eigenpairs sequentially (Fig. 1), and (ii) a self-attention-based approach that returns an arbitrary number of eigenpairs in a single forward pass (Fig. 2). Symmetry was also enforced so that the model is equivariant.
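For illustration, here is a minimal sketch of the Rayleigh-quotient evaluation, assuming a real wavefunction on a 1D grid with hbar = m = 1; `rayleigh_quotient`, `psi_net`, and `V` are placeholder names, not the thesis code. Differentiating the network output once and integrating on the grid gives E = ⟨ψ|H|ψ⟩ / ⟨ψ|ψ⟩, which can then feed the PINN residual loss.

```python
import torch

def rayleigh_quotient(psi_net, V, x):
    """E = <psi|H|psi> / <psi|psi> for H = -0.5 d^2/dx^2 + V(x)  (hbar = m = 1).

    psi_net : network mapping x -> psi(x) (real-valued)
    V       : callable potential, V(x) broadcastable to psi's shape
    x       : (N, 1) grid of sample points
    Integration by parts means only first derivatives of psi are needed.
    """
    x = x.detach().requires_grad_(True)
    psi = psi_net(x)
    (dpsi,) = torch.autograd.grad(psi, x, torch.ones_like(psi), create_graph=True)
    density = 0.5 * dpsi**2 + V(x) * psi**2          # kinetic + potential density
    grid = x.squeeze(-1).detach()                    # fixed quadrature nodes
    num = torch.trapezoid(density.squeeze(-1), grid)
    den = torch.trapezoid(psi.squeeze(-1) ** 2, grid)
    return num / den
```

Because the eigenvalue is computed from the eigenfunction rather than learned separately, the loss has one fewer free quantity to balance, which is consistent with the faster convergence observed against the two-network baseline.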

