Kobayashi, T.[Takumi],
Watanabe, K.[Kenji],
Otsu, N.[Nobuyuki],
Logistic label propagation,
PRL(33), No. 5, 1 April 2012, pp. 580-588.
Elsevier DOI
1202
Semi-supervised learning; Logistic function; Label propagation;
Similarity; Gradient descent
BibRef
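The logistic label propagation above extends the classical graph-based label-propagation iteration. A minimal sketch of that classical baseline (Zhou et al.-style normalized propagation; the similarity graph and toy data are illustrative, and the paper's logistic variant is not reproduced):

```python
import numpy as np

# Classical label propagation: F <- alpha*S*F + (1-alpha)*Y on a similarity
# graph. The paper's logistic variant replaces this linear update with
# logistic discriminant functions; only the baseline template is shown.
def label_propagation(W, Y, alpha=0.9, n_iter=100):
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))            # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1.0 - alpha) * Y  # propagate, re-inject known labels
    return F.argmax(axis=1)

# Toy graph: node 2 is unlabeled but more similar to node 0 than node 1.
W = np.array([[0.0, 1.0, 0.5], [1.0, 0.0, 0.1], [0.5, 0.1, 0.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(label_propagation(W, Y))   # -> [0 1 0]
```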
Baidoo-Williams, H.E.,
Dasgupta, S.,
Mudumbai, R.,
Bai, E.[Erwei],
On the Gradient Descent Localization of Radioactive Sources,
SPLetters(20), No. 11, 2013, pp. 1046-1049.
IEEE DOI
1310
gradient methods
BibRef
Chen, J.,
Liu, Y.,
Data-Time Tradeoffs for Corrupted Sensing,
SPLetters(25), No. 7, July 2018, pp. 941-945.
IEEE DOI
1807
Gaussian processes, gradient methods, signal reconstruction,
PGD method, corrupted sensing problems, data-time tradeoff,
projected gradient descent (PGD)
BibRef
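The entry above analyzes projected gradient descent (PGD) for corrupted sensing. A minimal sketch of the generic PGD template on a simpler constraint set (nonnegative least squares); the structured projections of the corrupted-sensing setting are not reproduced:

```python
import numpy as np

# Generic projected gradient descent (PGD): a gradient step on
# f(x) = 0.5*||Ax - b||^2 followed by projection onto the constraint set,
# here the nonnegative orthant.
def pgd(A, b, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - b)     # gradient step
        x = np.maximum(x, 0.0)               # projection onto x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.abs(rng.standard_normal(10))
print(np.allclose(pgd(A, A @ x_true), x_true, atol=1e-4))   # True
```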
Bottarelli, L.[Lorenzo],
Loog, M.[Marco],
Gaussian process variance reduction by location selection,
PRL(125), 2019, pp. 727-734.
Elsevier DOI
1909
BibRef
Earlier:
Gradient Descent for Gaussian Processes Variance Reduction,
SSSPR18(160-169).
Springer DOI
1810
Gaussian process, Variance reduction, Gradient descent, Sampling
BibRef
Li, P.L.[Pei-Lin],
Lee, S.H.[Sang-Heon],
Park, J.S.[Jae-Sam],
Development of a global batch clustering with gradient descent and
initial parameters in colour image classification,
IET-IPR(13), No. 1, January 2019, pp. 161-174.
DOI Link
1812
BibRef
Cheng, C.,
Emirov, N.,
Sun, Q.,
Preconditioned Gradient Descent Algorithm for Inverse Filtering on
Spatially Distributed Networks,
SPLetters(27), 2020, pp. 1834-1838.
IEEE DOI
2011
Signal processing algorithms, Approximation algorithms,
Iterative methods, Data processing, Symmetric matrices,
quasi-Newton method
BibRef
Qu, Q.[Qing],
Li, X.[Xiao],
Zhu, Z.H.[Zhi-Hui],
Exact Recovery of Multichannel Sparse Blind Deconvolution via
Gradient Descent,
SIIMS(13), No. 3, 2020, pp. 1630-1652.
DOI Link
2010
BibRef
Benning, M.[Martin],
Betcke, M.M.[Marta M.],
Ehrhardt, M.J.[Matthias J.],
Schönlieb, C.B.[Carola-Bibiane],
Choose Your Path Wisely:
Gradient Descent in a Bregman Distance Framework,
SIIMS(14), No. 2, 2021, pp. 814-843.
DOI Link
2107
BibRef
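The Bregman-distance framework above generalizes gradient descent by replacing the squared Euclidean distance in the descent step with a Bregman distance. A minimal illustration under one standard generator choice (negative entropy on the simplex, which yields the exponentiated-gradient update); the paper treats far more general generators:

```python
import numpy as np

# Bregman gradient descent with the negative-entropy generator reduces to
# the exponentiated-gradient (mirror descent) update on the simplex.
def entropic_bregman_step(x, grad, eta=0.5):
    y = x * np.exp(-eta * grad)   # multiplicative update from the entropy mirror map
    return y / y.sum()            # Bregman projection back onto the simplex

x = np.full(3, 1.0 / 3.0)
for _ in range(100):
    x = entropic_bregman_step(x, grad=np.array([1.0, 2.0, 3.0]))  # linear objective
print(x)   # mass concentrates on the coordinate with the smallest gradient
```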
Sun, T.,
Qiao, L.,
Liao, Q.,
Li, D.,
Novel Convergence Results of Adaptive Stochastic Gradient Descents,
IP(30), 2021, pp. 1044-1056.
IEEE DOI
2012
Convergence, Training, Optimization, Task analysis,
Stochastic processes, Adaptive systems,
nonergodic convergence
BibRef
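Adaptive stochastic gradient methods of the family analyzed above adjust per-coordinate step sizes from the gradient history. An AdaGrad-style sketch as one representative instance (the specific scheme and toy objective are illustrative):

```python
import numpy as np

# AdaGrad-style adaptive step: the per-coordinate step size shrinks with
# the accumulated squared gradients.
def adagrad_step(x, grad, G, lr=1.0, eps=1e-8):
    G += grad ** 2                        # running sum of squared gradients
    x -= lr * grad / (np.sqrt(G) + eps)   # coordinate-wise adaptive step
    return x, G

x, G = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(500):
    x, G = adagrad_step(x, 2 * x, G)      # exact gradient of f(x) = ||x||^2
print(x)   # shrinks toward the minimizer at the origin
```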
Tirer, T.[Tom],
Giryes, R.[Raja],
On the Convergence Rate of Projected Gradient Descent for a
Back-Projection Based Objective,
SIIMS(14), No. 4, 2021, pp. 1504-1531.
DOI Link
2112
BibRef
Lei, Y.[Yunwen],
Tang, K.[Ke],
Learning Rates for Stochastic Gradient Descent With Nonconvex
Objectives,
PAMI(43), No. 12, December 2021, pp. 4505-4511.
IEEE DOI
2112
Complexity theory, Training data, Convergence, Statistics,
Behavioral sciences, Computational modeling, early stopping
BibRef
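Learning-rate analyses of nonconvex SGD, such as the one above, typically study schedules like eta_t proportional to 1/sqrt(t). A toy sketch of that template (the quadratic objective and Gaussian gradient noise are illustrative stand-ins, and the paper's exact schedules and assumptions differ):

```python
import numpy as np

# SGD with an eta_0/sqrt(t+1) step-size decay on a noisy quadratic.
rng = np.random.default_rng(0)
x, eta0 = np.array([4.0, -2.0]), 0.25
for t in range(2000):
    g = 2 * x + 0.1 * rng.standard_normal(2)   # noisy gradient of ||x||^2
    x -= eta0 / np.sqrt(t + 1) * g             # decaying step size
print(x)   # near the origin, up to a shrinking noise floor
```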
Xu, J.[Jie],
Zhang, W.[Wei],
Wang, F.[Fei],
A(DP)^2SGD: Asynchronous Decentralized Parallel Stochastic Gradient
Descent With Differential Privacy,
PAMI(44), No. 11, November 2022, pp. 8036-8047.
IEEE DOI
2210
Differential privacy, Computational modeling, Servers, Training,
Privacy, Stochastic processes, Data models, Distributed learning,
differential privacy
BibRef
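The core differentially private step underlying methods like A(DP)^2SGD is the clip-and-noise gradient update of DP-SGD. A minimal single-node sketch (the clipping norm and noise scale are illustrative, and the paper's asynchronous decentralized protocol is omitted):

```python
import numpy as np

# Differentially private gradient step: clip each per-example gradient to
# bound its influence, average, then add Gaussian noise.
def dp_sgd_step(x, per_example_grads, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]            # per-example clipping
    g_bar = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(per_example_grads), size=x.shape)
    return x - lr * (g_bar + noise)                   # noisy clipped-mean gradient

rng = np.random.default_rng(0)
grads = [rng.standard_normal(3) for _ in range(8)]    # per-example gradient stand-ins
print(dp_sgd_step(np.zeros(3), grads, rng=rng))
```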
Guo, S.W.[Shang-Wei],
Zhang, T.W.[Tian-Wei],
Yu, H.[Han],
Xie, X.F.[Xiao-Fei],
Ma, L.[Lei],
Xiang, T.[Tao],
Liu, Y.[Yang],
Byzantine-Resilient Decentralized Stochastic Gradient Descent,
CirSysVideo(32), No. 6, June 2022, pp. 4096-4106.
IEEE DOI
2206
Training, Servers, Learning systems, Distance learning,
Computer aided instruction, Security, Fault tolerant systems,
Byzantine fault tolerance
BibRef
Wang, B.[Bao],
Nguyen, T.[Tan],
Sun, T.[Tao],
Bertozzi, A.L.[Andrea L.],
Baraniuk, R.G.[Richard G.],
Osher, S.J.[Stanley J.],
Scheduled Restart Momentum for Accelerated Stochastic Gradient
Descent,
SIIMS(15), No. 2, 2022, pp. 738-761.
DOI Link
2206
BibRef
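A sketch in the spirit of scheduled-restart momentum: the Nesterov iteration runs with its usual growing momentum, and the schedule is periodically reset. The restart period and the use of an exact gradient are illustrative; the paper applies this with stochastic gradients and tuned restart schedules:

```python
import numpy as np

# Nesterov momentum with scheduled restarts: the momentum counter k (and
# hence the momentum coefficient) is reset every `restart` iterations.
def srsgd(grad, x0, lr=0.1, restart=40, n_iter=400):
    x, v, k = x0.copy(), x0.copy(), 0
    for _ in range(n_iter):
        k += 1
        x_new = v - lr * grad(v)                  # gradient step at lookahead point
        v = x_new + k / (k + 3.0) * (x_new - x)   # Nesterov-style momentum
        x = x_new
        if k == restart:                          # scheduled restart: drop momentum
            k, v = 0, x.copy()
    return x

print(srsgd(lambda z: 2 * z, np.array([5.0, -3.0])))   # ~[0, 0]
```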
Lv, X.[Xiao],
Cui, W.[Wei],
Liu, Y.L.[Yu-Long],
A Sharp Analysis of Covariate Adjusted Precision Matrix Estimation
via Alternating Projected Gradient Descent,
SPLetters(29), 2022, pp. 877-881.
IEEE DOI
2204
Signal processing algorithms, Convergence, Estimation,
Complexity theory, Estimation error, Linear regression,
alternating gradient descent
BibRef
Jin, B.[Bangti],
Kereta, Z.[Zeljko],
On the Convergence of Stochastic Gradient Descent for Linear Inverse
Problems in Banach Spaces,
SIIMS(16), No. 2, 2023, pp. 671-705.
DOI Link
2306
BibRef
Lazzaretti, M.[Marta],
Kereta, Z.[Zeljko],
Estatico, C.[Claudio],
Calatroni, L.[Luca],
Stochastic Gradient Descent for Linear Inverse Problems in Variable
Exponent Lebesgue Spaces,
SSVM23(457-470).
Springer DOI
2307
BibRef
Huang, F.H.[Fei-Hu],
Gao, S.Q.[Shang-Qian],
Gradient Descent Ascent for Minimax Problems on Riemannian Manifolds,
PAMI(45), No. 7, July 2023, pp. 8466-8476.
IEEE DOI
2306
Manifolds, Optimization, Training, Machine learning,
Complexity theory, Principal component analysis, Neural networks,
Stiefel manifold
BibRef
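Plain (Euclidean) gradient descent ascent on a toy minimax problem min_x max_y f(x, y); in the Riemannian setting of the paper, each step is additionally mapped back onto the manifold via a retraction, which is omitted here:

```python
import numpy as np

# Gradient descent ascent (GDA): descend in x, ascend in y.
def gda(x, y, eta_x=0.05, eta_y=0.02, n_iter=2000):
    for _ in range(n_iter):
        gx = x + y                              # d/dx of 0.5*x^2 + x*y - 0.5*y^2
        gy = x - y                              # d/dy of the same objective
        x, y = x - eta_x * gx, y + eta_y * gy   # simultaneous updates
    return x, y

print(gda(2.0, -1.0))   # converges toward the saddle point at (0, 0)
```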
Bigolin-Lanfredi, R.[Ricardo],
Schroeder, J.D.[Joyce D.],
Tasdizen, T.[Tolga],
Quantifying the preferential direction of the model gradient in
adversarial training with projected gradient descent,
PR(139), 2023, pp. 109430.
Elsevier DOI
2304
Robustness, Robust models, Gradient direction,
Gradient alignment, Deep learning, PGD, Adversarial training, GAN
BibRef
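The PGD adversarial attack underlying the adversarial training studied above: repeated signed gradient ascent on the loss, projected back into an L-infinity ball of radius eps around the clean input. The quadratic toy loss is a hypothetical stand-in for a network's loss gradient:

```python
import numpy as np

# L-infinity PGD attack: signed ascent steps with eps-ball projection.
def pgd_attack(grad_loss, x, eps=0.1, alpha=0.02, n_iter=20):
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv))   # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)            # eps-ball projection
    return x_adv

t = np.array([0.2, -0.4])               # minimizer of a toy loss 0.5*||z - t||^2
x = np.zeros(2)
print(pgd_attack(lambda z: z - t, x))   # pushed away from t, capped at eps
```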
Fermanian, R.[Rita],
Pendu, M.L.[Mikael Le],
Guillemot, C.[Christine],
PnP-ReG: Learned Regularizing Gradient for Plug-and-Play Gradient
Descent,
SIIMS(16), No. 2, 2023, pp. 585-613.
DOI Link
2306
BibRef
Pasadakis, D.[Dimosthenis],
Bollhöfer, M.[Matthias],
Schenk, O.[Olaf],
Sparse Quadratic Approximation for Graph Learning,
PAMI(45), No. 9, September 2023, pp. 11256-11269.
IEEE DOI
2309
BibRef
Hurault, S.[Samuel],
Chambolle, A.[Antonin],
Leclaire, A.[Arthur],
Papadakis, N.[Nicolas],
A Relaxed Proximal Gradient Descent Algorithm for Convergent
Plug-and-play with Proximal Denoiser,
SSVM23(379-392).
Springer DOI
2307
BibRef
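Plain proximal gradient descent (ISTA) for L1-regularized least squares, the classical iteration whose relaxed plug-and-play variant (prox replaced by a learned proximal denoiser) the entry above analyzes; the toy sparse-recovery problem is illustrative:

```python
import numpy as np

# ISTA for 0.5*||Ax - b||^2 + lam*||x||_1: gradient step, then the
# soft-thresholding proximal operator of the l1 norm.
def ista(A, b, lam=0.05, n_iter=1000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - b)                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft-threshold prox
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x0 = np.zeros(40); x0[3], x0[17] = 1.0, -2.0
print(ista(A, A @ x0)[[3, 17]])   # approximately 1.0 and -2.0 (small l1 bias)
```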
Barbano, R.[Riccardo],
Zhang, C.[Chen],
Arridge, S.[Simon],
Jin, B.[Bangti],
Quantifying Model Uncertainty in Inverse Problems via Bayesian Deep
Gradient Descent,
ICPR21(1392-1399)
IEEE DOI
2105
Training, Uncertainty, Inverse problems, Computational modeling,
Scalability, Neural networks, Reconstruction algorithms
BibRef
Liu, H.K.[Hui-Kang],
Wang, X.L.[Xiao-Lu],
Li, J.J.[Jia-Jin],
So, A.M.C.[Anthony Man-Cho],
Low-Cost Lipschitz-Independent Adaptive Importance Sampling of
Stochastic Gradients,
ICPR21(2150-2157)
IEEE DOI
2105
Gradient descent.
Training, Monte Carlo methods, Upper bound, Neural networks,
Training data, Sampling methods
BibRef
Zhuo, L.,
Zhang, B.,
Yang, L.,
Chen, H.,
Ye, Q.,
Doermann, D.,
Ji, R.,
Guo, G.,
Cogradient Descent for Bilinear Optimization,
CVPR20(7956-7964)
IEEE DOI
2008
Optimization, Convergence, Training, Convolutional codes, Kernel,
Filtering algorithms, Machine learning
BibRef
Volhejn, V.[Václav],
Lampert, C.H.[Christoph H.],
Does SGD Implicitly Optimize for Smoothness?,
GCPR20(246-259).
Springer DOI
2110
stochastic gradient descent.
BibRef
Kobayashi, T.[Takumi],
SCW-SGD: Stochastically Confidence-Weighted SGD,
ICIP20(1746-1750)
IEEE DOI
2011
Stochastic Gradient Descent.
Uncertainty, Perturbation methods, Training, Stochastic processes,
Neural networks, Optimization, Robustness, Neural Network,
Stochastic weighting
BibRef
Hsueh, B.,
Li, W.,
Wu, I.,
Stochastic Gradient Descent With Hyperbolic-Tangent Decay on
Classification,
WACV19(435-442)
IEEE DOI
1904
condition monitoring, gradient methods,
learning (artificial intelligence), neural nets,
Light rail systems
BibRef
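A hyperbolic-tangent learning-rate decay of the general shape proposed above: nearly flat early in training, then dropping sharply late. The lower and upper bounds used below (L = -6, U = 3) are illustrative assumptions, not quoted defaults from the paper:

```python
import numpy as np

# Tanh-shaped decay: lr(t) = lr0/2 * (1 - tanh(L + (U - L) * t / T)).
def htd_lr(t, T, lr0=0.1, L=-6.0, U=3.0):
    return 0.5 * lr0 * (1.0 - np.tanh(L + (U - L) * t / T))

T = 100
print([round(htd_lr(t, T), 5) for t in (0, 50, 90, 100)])  # flat, then sharp drop
```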
Rodriguez, P.,
Accelerated Gradient Descent Method for Projections onto the L_1-Ball,
IVMSP18(1-5)
IEEE DOI
1809
Acceleration, Newton method, Optimization, Extrapolation,
Electrical engineering, Indexes,
Accelerated gradient descent
BibRef
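The sort-based Euclidean projection onto the L1-ball (the algorithm of Duchi et al.), a standard baseline for the projection problem the entry above addresses; the paper's accelerated method itself is not reproduced here:

```python
import numpy as np

# O(n log n) projection onto {w : ||w||_1 <= z}: sort magnitudes, find the
# support size rho, shrink by the resulting threshold theta.
def project_l1_ball(v, z=1.0):
    if np.abs(v).sum() <= z:
        return v.copy()                        # already inside the ball
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - z) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)       # optimal shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([0.8, -0.6, 0.3]))
print(w, np.abs(w).sum())   # projected point has unit l1 norm
```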
Larsson, M.[Måns],
Arnab, A.[Anurag],
Kahl, F.[Fredrik],
Zheng, S.[Shuai],
Torr, P.H.S.[Philip H.S.],
A Projected Gradient Descent Method for CRF Inference Allowing
End-to-End Training of Arbitrary Pairwise Potentials,
EMMCVPR17(564-579).
Springer DOI
1805
BibRef
Roy, S.K.,
Harandi, M.,
Constrained Stochastic Gradient Descent: The Good Practice,
DICTA17(1-8)
IEEE DOI
1804
geometry, gradient methods, learning (artificial intelligence),
optimisation, stochastic processes, Riemannian geometry,
Symmetric matrices
BibRef
Luo, Z.J.[Zhi-Jian],
Liao, D.P.[Dan-Ping],
Qian, Y.T.[Yun-Tao],
Bound analysis of natural gradient descent in stochastic optimization
setting,
ICPR16(4166-4171)
IEEE DOI
1705
Computer science, Convergence, Extraterrestrial measurements,
Mirrors, Neural networks, Optimization, Bound Analysis,
Mirror Gradient, Natural Gradient, Riemannian Space, Stochastic Optimization
BibRef
Yildiz, A.[Alparslan],
Akgul, Y.S.[Yusuf Sinan],
A Gradient Descent Approximation for Graph Cuts,
DAGM09(312-321).
Springer DOI
0909
BibRef
Ishikawa, H.[Hiroshi],
Higher-order gradient descent by fusion-move graph cut,
ICCV09(568-574).
IEEE DOI
0909
BibRef
And:
Higher-order clique reduction in binary graph cut,
CVPR09(2993-3000).
IEEE DOI
0906
BibRef
Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Bayesian Networks, Bayes Nets.