AI RESEARCH PAPERS & ACADEMIC SOURCES
- Towards Understanding Gradient Flow Dynamics of Homogeneous Neural Networks Beyond the Origin
- Optimal Complexity in Byzantine-Robust Distributed Stochastic Optimization with Data Heterogeneity
- Towards Unified Native Spaces in Kernel Methods
- TorchCP: A Python Library for Conformal Prediction
- Hopfield-Fenchel-Young Networks: A Unified Framework for Associative Memory Retrieval
- Identifiability of Causal Graphs under Non-Additive Conditionally Parametric Causal Models
- Fundamental Limits of Membership Inference Attacks on Machine Learning Models
- On the Robustness of Kernel Goodness-of-Fit Tests
- Efficient Online Prediction for High-Dimensional Time Series via Joint Tensor Tucker Decomposition
- Fast Computation of Superquantile-Constrained Optimization Through Implicit Scenario Reduction
- Collaborative likelihood-ratio estimation over graphs
- On the Utility of Equal Batch Sizes for Inference in Stochastic Gradient Descent
- Differentially Private Bootstrap: New Privacy Analysis and Inference Strategies
- Convergence and Sample Complexity of Natural Policy Gradient Primal-Dual Methods for Constrained MDPs
- Differentially Private Multivariate Medians
- VFOSA: Variance-Reduced Fast Operator Splitting Algorithms for Generalized Equations
- Scaling Capability in Token Space: An Analysis of Large Vision Language Model
- Minimax Optimal Two-Sample Testing under Local Differential Privacy
- Jackpot: Approximating Uncertainty Domains with Adversarial Manifolds
- An Asymptotically Optimal Coordinate Descent Algorithm for Learning Bayesian Networks from Gaussian Models
- Convergence Rates for Non-Log-Concave Sampling and Log-Partition Estimation
- A Unified Framework to Enforce, Discover, and Promote Symmetry in Machine Learning
- Infinite-dimensional Mahalanobis Distance with Applications to Kernelized Novelty Detection
- Stable learning using spiking neural networks equipped with affine encoders and decoders
- Efficient Knowledge Deletion from Trained Models Through Layer-wise Partial Machine Unlearning
- General Loss Functions Lead to (Approximate) Interpolation in High Dimensions
- Piecewise deterministic sampling with splitting schemes
- Hierarchical and Stochastic Crystallization Learning: Geometrically Leveraged Nonparametric Regression with Delaunay Triangulation
- Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2
- Decentralized Bilevel Optimization: A Perspective from Transient Iteration Complexity
- Fair Text Classification via Transferable Representations
- Stochastic Interior-Point Methods for Smooth Conic Optimization with Applications
- Revisiting Gradient Normalization and Clipping for Nonconvex SGD under Heavy-Tailed Noise: Necessity, Sufficiency, and Acceleration
- Generalized multi-view model: Adaptive density estimation under low-rank constraints
- (De)-regularized Maximum Mean Discrepancy Gradient Flow
- On Probabilistic Embeddings in Optimal Dimension Reduction
- Physics Informed Kolmogorov-Arnold Neural Networks for Dynamical Analysis via Efficient-KAN and WAV-KAN
- Graph-accelerated Markov Chain Monte Carlo using Approximate Samples
- Online Quantile Regression
- Statistical Inference of Random Graphs With a Surrogate Likelihood Function
- On the Representation of Pairwise Causal Background Knowledge and Its Applications in Causal Inference
- An Augmentation Overlap Theory of Contrastive Learning
- Algorithms for ridge estimation with convergence guarantees
- TALENT: A Tabular Analytics and Learning Toolbox
- Inferring Change Points in High-Dimensional Regression via Approximate Message Passing
- Universality of Kernel Random Matrices and Kernel Regression in the Quadratic Regime
- Lexicographic Lipschitz Bandits: New Algorithms and a Lower Bound
- On the Natural Gradient of the Evidence Lower Bound
- Geometry and Stability of Supervised Learning Problems
- Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination
- Optimal Rates of Kernel Ridge Regression under Source Condition in Large Dimensions
- A Hybrid Weighted Nearest Neighbour Classifier for Semi-Supervised Learning
- Scalable and Adaptive Variational Bayes Methods for Hawkes Processes
- Biological Sequence Kernels with Guaranteed Flexibility
- Unified Discrete Diffusion for Categorical Data
- Reinforcement Learning for Infinite-Dimensional Systems
- Deep Neural Networks are Adaptive to Function Regularity and Data Distribution in Approximation and Estimation
- Generation of Geodesics with Actor-Critic Reinforcement Learning to Predict Midpoints
- Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation
- Sparse Semiparametric Discriminant Analysis for High-dimensional Zero-inflated Data
- Stochastic Interpolants: A Unifying Framework for Flows and Diffusions
- Efficient Methods for Non-stationary Online Learning
- Decentralized Asynchronous Optimization with DADAO allows Decoupling and Acceleration
- Mixtures of Gaussian Process Experts with SMC^2
- Robust Point Matching with Distance Profiles
- BoFire: Bayesian Optimization Framework Intended for Real Experiments
- Reliever: Relieving the Burden of Costly Model Fits for Changepoint Detection
- Variational Inference for Uncertainty Quantification: an Analysis of Trade-offs
- Are Ensembles Getting Better All the Time?
- An Adaptive Parameter-free and Projection-free Restarting Level Set Method for Constrained Convex Optimization Under the Error Bound Condition
- Operator Learning for Hyperbolic PDEs
- Optimal subsampling for high-dimensional partially linear models via machine learning methods
- Decentralized Sparse Linear Regression via Gradient-Tracking
- Calibrated Inference: Statistical Inference that Accounts for Both Sampling Uncertainty and Distributional Uncertainty
- Relaxed Gaussian Process Interpolation: a Goal-Oriented Approach to Bayesian Optimization
Research Sources: 75 | Generated: 1/2/2026
