14.5.10.8.13 Forgetting, Learning without Forgetting, Convolutional Neural Networks

Convolutional Neural Networks. Forgetting. CNN.
See also CNN Interpretation, Explanation, Understanding of Convolutional Neural Networks.
See also Continual Learning.
See also Dynamic Learning, Incremental Learning.
See also Explainable Artificial Intelligence.

Li, Z.Z.[Zhi-Zhong], Hoiem, D.[Derek],
Learning Without Forgetting,
PAMI(40), No. 12, December 2018, pp. 2935-2947.
IEEE DOI 1811
BibRef
Earlier: ECCV16(IV: 614-629).
Springer DOI 1611
Keep the old results in the network, but learn new capabilities. Feature extraction, Deep learning, Training data, Neural networks, Convolutional neural networks, Knowledge engineering, visual recognition. BibRef
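A minimal sketch of the Learning-without-Forgetting idea noted above, assuming a PyTorch setting: supervised cross-entropy on the new task is combined with a distillation term that keeps the updated network's old-task outputs close to the frozen old model's recorded responses. This is an illustrative approximation, not the authors' released code; the names lwf_loss, temperature, and lambda_old are assumptions.

    # Illustrative LwF-style loss (sketch only, not the authors' implementation).
    import torch.nn.functional as F

    def lwf_loss(new_task_logits, old_task_logits_current, old_task_logits_recorded,
                 targets, temperature=2.0, lambda_old=1.0):
        # Supervised cross-entropy on the new task's labels.
        ce = F.cross_entropy(new_task_logits, targets)
        # Distillation: soften both old-task distributions and match them,
        # so the network keeps its old results while learning the new task.
        t = temperature
        old_soft = F.softmax(old_task_logits_recorded / t, dim=1)
        new_log_soft = F.log_softmax(old_task_logits_current / t, dim=1)
        distill = F.kl_div(new_log_soft, old_soft, reduction="batchmean") * (t * t)
        return ce + lambda_old * distill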

Li, Z.Z.[Zhi-Zhong], Hoiem, D.[Derek],
Improving Confidence Estimates for Unfamiliar Examples,
CVPR20(2683-2692)
IEEE DOI 2008
Training, Calibration, Dogs, Uncertainty, Cats, Task analysis, Testing BibRef

Schutera, M.[Mark], Hafner, F.M.[Frank M.], Abhau, J.[Jochen], Hagenmeyer, V.[Veit], Mikut, R.[Ralf], Reischl, M.[Markus],
Cuepervision: self-supervised learning for continuous domain adaptation without catastrophic forgetting,
IVC(106), 2021, pp. 104079.
Elsevier DOI 2102
Domain adaptation, Self-supervised learning, Unsupervised learning, Continuous transfer learning, MNIST dataset BibRef

Osman, I.[Islam], Eltantawy, A.[Agwad], Shehata, M.S.[Mohamed S.],
Task-based parameter isolation for foreground segmentation without catastrophic forgetting using multi-scale region and edges fusion network,
IVC(113), 2021, pp. 104248.
Elsevier DOI 2108
Foreground segmentation, Moving objects, Deep learning, Continual learning, Parameter isolation BibRef

Toohey, J.R.[Jack R.], Raunak, M.S., Binkley, D.[David],
From Neuron Coverage to Steering Angle: Testing Autonomous Vehicles Effectively,
Computer(54), No. 8, August 2021, pp. 77-85.
IEEE DOI 2108
Create new images to retrain an existing DNN without forgetting. Deep learning, Neurons, Autonomous vehicles, Testing BibRef

Zhang, M.[Miao], Li, H.Q.[Hui-Qi], Pan, S.R.[Shi-Rui], Chang, X.J.[Xiao-Jun], Zhou, C.[Chuan], Ge, Z.Y.[Zong-Yuan], Su, S.[Steven],
One-Shot Neural Architecture Search: Maximising Diversity to Overcome Catastrophic Forgetting,
PAMI(43), No. 9, September 2021, pp. 2921-2935.
IEEE DOI 2108
Training, Optimization, Neural networks, Search methods, Australia, Germanium, AutoML, novelty search BibRef

Lao, Q.C.[Qi-Cheng], Mortazavi, M.[Mehrzad], Tahaei, M.[Marzieh], Dutil, F.[Francis], Fevens, T.[Thomas], Havaei, M.[Mohammad],
FoCL: Feature-oriented continual learning for generative models,
PR(120), 2021, pp. 108127.
Elsevier DOI 2109
Catastrophic forgetting, Continual learning, Generative models, Feature matching, Generative replay, Pseudo-rehearsal BibRef

Peng, C.[Can], Zhao, K.[Kun], Maksoud, S.[Sam], Li, M.[Meng], Lovell, B.C.[Brian C.],
SID: Incremental learning for anchor-free object detection via Selective and Inter-related Distillation,
CVIU(210), 2021, pp. 103229.
Elsevier DOI 2109
Deals with a deep network failing on old tasks after training on new data, i.e., catastrophic forgetting. Incremental learning, Object detection, Knowledge distillation BibRef

Wang, M.[Meng], Guo, Z.B.[Zheng-Bing], Li, H.F.[Hua-Feng],
A dynamic routing CapsNet based on increment prototype clustering for overcoming catastrophic forgetting,
IET-CV(16), No. 1, 2022, pp. 83-97.
DOI Link 2202
capsule network, catastrophic forgetting, continual learning, dynamic routing, prototype clustering BibRef

Marconato, E.[Emanuele], Bontempo, G.[Gianpaolo], Teso, S.[Stefano], Ficarra, E.[Elisa], Calderara, S.[Simone], Passerini, A.[Andrea],
Catastrophic Forgetting in Continual Concept Bottleneck Models,
CL4REAL22(539-547).
Springer DOI 2208
BibRef

Baik, S.[Sungyong], Oh, J.[Junghoon], Hong, S.[Seokil], Lee, K.M.[Kyoung Mu],
Learning to Forget for Meta-Learning via Task-and-Layer-Wise Attenuation,
PAMI(44), No. 11, November 2022, pp. 7718-7730.
IEEE DOI 2210
Task analysis, Optimization, Adaptation models, Attenuation, Knowledge engineering, Visualization, Neural networks, visual tracking BibRef

Boschini, M.[Matteo], Buzzega, P.[Pietro], Bonicelli, L.[Lorenzo], Porrello, A.[Angelo], Calderara, S.[Simone],
Continual semi-supervised learning through contrastive interpolation consistency,
PRL(162), 2022, pp. 9-14.
Elsevier DOI 2210
Continual learning, Deep learning, Semi-supervised learning, Weak supervision, Catastrophic forgetting BibRef

Huang, F.X.[Fu-Xian], Li, W.C.[Wei-Chao], Lin, Y.[Yining], Ji, N.[Naye], Li, S.J.[Shi-Jian], Li, X.[Xi],
Memory-efficient distribution-guided experience sampling for policy consolidation,
PRL(164), 2022, pp. 126-131.
Elsevier DOI 2212
Learn new skills in sequence without forgetting old skills. Reinforcement learning, Policy consolidation, Distribution-guided sampling, Memory efficiency, Distributional neural network BibRef

Ma, R.[Rui], Wu, Q.B.[Qing-Bo], Ngan, K.N.[King Ngi], Li, H.L.[Hong-Liang], Meng, F.M.[Fan-Man], Xu, L.F.[Lin-Feng],
Forgetting to Remember: A Scalable Incremental Learning Framework for Cross-Task Blind Image Quality Assessment,
MultMed(25), 2023, pp. 8817-8827.
IEEE DOI 2312
BibRef

Benko, B.[Beatrix],
Example forgetting and rehearsal in continual learning,
PRL(179), 2024, pp. 65-72.
Elsevier DOI 2403
Continual learning, Catastrophic forgetting, Rehearsal exemplar selection BibRef
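Rehearsal-based methods such as this recur throughout the section: a small buffer of past exemplars is replayed alongside new-task data so that earlier tasks are not forgotten. A minimal sketch follows, assuming reservoir sampling for exemplar selection; the class name ReplayBuffer and its methods are illustrative and not taken from the cited paper.

    # Illustrative rehearsal buffer with reservoir sampling (sketch only).
    import random

    class ReplayBuffer:
        def __init__(self, capacity=200):
            self.capacity = capacity
            self.data = []    # stored (example, label) exemplars
            self.seen = 0     # total number of examples observed so far

        def add(self, example, label):
            # Reservoir sampling keeps a uniform sample of everything seen.
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((example, label))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (example, label)

        def sample(self, batch_size):
            # Mix these replayed exemplars into each new-task training batch.
            k = min(batch_size, len(self.data))
            return random.sample(self.data, k)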

Qu, Y.Y.[You-Yang], Yuan, X.[Xin], Ding, M.[Ming], Ni, W.[Wei], Rakotoarivelo, T.[Thierry], Smith, D.[David],
Learn to Unlearn: Insights Into Machine Unlearning,
Computer(57), No. 3, March 2024, pp. 79-90.
IEEE DOI 2403
Privacy, Reviews, Machine learning, Resilience BibRef

Zhou, D.[DaiLiang], Song, Y.[YongHong],
PNSP: Overcoming catastrophic forgetting using Primary Null Space Projection in continual learning,
PRL(179), 2024, pp. 137-143.
Elsevier DOI 2403
Continual learning, Catastrophic forgetting, Null space, Low-rank approximation, Feature alignment BibRef

Hassan, M.A.[Mohamed Abubakr], Lee, C.G.[Chi-Guhn],
Forget to Learn (F2L): Circumventing plasticity-stability trade-off in continuous unsupervised domain adaptation,
PR(159), 2025, pp. 111139.
Elsevier DOI 2412
Plasticity-stability dilemma, Continuous unsupervised domain adaptation, Forgetting BibRef


Huang, M.H.[Mark He], Foo, L.G.[Lin Geng], Liu, J.[Jun],
Learning to Unlearn for Robust Machine Unlearning,
ECCV24(LII: 202-219).
Springer DOI 2412
BibRef

Zhang, J.[Jihai], Lan, X.[Xiang], Qu, X.Y.[Xiao-Ye], Cheng, Y.[Yu], Feng, M.L.[Meng-Ling], Hooi, B.[Bryan],
Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning,
ECCV24(LXXXIII: 35-52).
Springer DOI 2412
BibRef

Zhang, Y.M.[Yi-Meng], Jia, J.H.[Jing-Han], Chen, X.[Xin], Chen, A.[Aochuan], Zhang, Y.H.[Yi-Hua], Liu, J.C.[Jian-Cheng], Ding, K.[Ke], Liu, S.[Sijia],
To Generate or Not? Safety-driven Unlearned Diffusion Models Are Still Easy to Generate Unsafe Images ... For Now,
ECCV24(LVII: 385-403).
Springer DOI 2412
BibRef

Cheng, J.L.[Jia-Li], Amiri, H.[Hadi],
Multidelete for Multimodal Machine Unlearning,
ECCV24(XLI: 165-184).
Springer DOI 2412
BibRef

Bhatt, G.[Gaurav], Ross, J.[James], Sigal, L.[Leonid],
Preventing Catastrophic Forgetting Through Memory Networks in Continuous Detection,
ECCV24(LXXXIV: 442-458).
Springer DOI 2412
BibRef

Medeiros, H.R.[Heitor Rapela], Aminbeidokhti, M.[Masih], Peña, F.A.G.[Fidel Alejandro Guerrero], Latortue, D.[David], Granger, E.[Eric], Pedersoli, M.[Marco],
Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge,
ECCV24(LXXXIX: 51-68).
Springer DOI 2412
BibRef

Zheng, M.Y.[Meng-Yu], Tang, Y.[Yehui], Hao, Z.W.[Zhi-Wei], Han, K.[Kai], Wang, Y.H.[Yun-He], Xu, C.[Chang],
Adapt Without Forgetting: Distill Proximity from Dual Teachers in Vision-language Models,
ECCV24(LIV: 109-125).
Springer DOI 2412
BibRef

Fan, C.[Chongyu], Liu, J.C.[Jian-Cheng], Hero, A.[Alfred], Liu, S.[Sijia],
Challenging Forgets: Unveiling the Worst-case Forget Sets in Machine Unlearning,
ECCV24(XXI: 278-297).
Springer DOI 2412
BibRef

Zhang, W.X.[Wen-Xuan], Janson, P.[Paul], Aljundi, R.[Rahaf], Elhoseiny, M.[Mohamed],
Overcoming Generic Knowledge Loss with Selective Parameter Update,
CVPR24(24046-24056)
IEEE DOI Code:
WWW Link. 2410
Accuracy, Codes, Knowledge based systems BibRef

Choi, D.[Dasol], Choi, S.[Soora], Lee, E.[Eunsun], Seo, J.[Jinwoo], Na, D.B.[Dong-Bin],
Towards Efficient Machine Unlearning with Data Augmentation: Guided Loss-Increasing (GLI) to Prevent the Catastrophic Model Utility Drop,
FaDE-TCV24(93-102)
IEEE DOI Code:
WWW Link. 2410
Measurement, Training, Source coding, Computational modeling, Data augmentation BibRef

Seo, J.[Juwon], Lee, S.H.[Sung-Hoon], Lee, T.Y.[Tae-Young], Moon, S.[Seungjun], Park, G.M.[Gyeong-Moon],
Generative Unlearning for Any Identity,
CVPR24(9151-9161)
IEEE DOI Code:
WWW Link. 2410
Industries, Privacy, Extrapolation, Codes, Generative adversarial networks, Generators, GAN BibRef

Lim, G.[Guihong], Hsu, H.[Hsiang], Chen, C.F.R.[Chun-Fu Richard], Marculescu, R.[Radu],
Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models,
FaDE-TCV24(227-234)
IEEE DOI 2410
Visualization, Computational modeling, Scalability, Artificial neural networks, Machine learning BibRef

Zhou, Y.H.[Yu-Hang], Hua, Z.Y.[Zhong-Yun],
Defense without Forgetting: Continual Adversarial Defense with Anisotropic & Isotropic Pseudo Replay,
CVPR24(24263-24272)
IEEE DOI 2410
Training, Manifolds, Upper bound, Atmospheric modeling, Semantics, Anisotropic, adversarial attack and defense, continual learning BibRef

Cao, X.Z.[Xin-Zi], Zheng, X.[Xiawu], Wang, G.[Guanhong], Yu, W.J.[Wei-Jiang], Shen, Y.[Yunhang], Li, K.[Ke], Lu, Y.T.[Yu-Tong], Tian, Y.H.[Yong-Hong],
Solving the Catastrophic Forgetting Problem in Generalized Category Discovery,
CVPR24(16880-16889)
IEEE DOI Code:
WWW Link. 2410
Accuracy, Image recognition, Codes, Predictive models, Entropy, Generalized Category Discovery, Catastrophic Forgetting BibRef

Li, Y.C.[Yi-Chen], Li, Q.[Qunwei], Wang, H.Z.[Hao-Zhao], Li, R.X.[Rui-Xuan], Zhong, W.L.[Wen-Liang], Zhang, G.[Guannan],
Towards Efficient Replay in Federated Incremental Learning,
CVPR24(12820-12829)
IEEE DOI 2410
Incremental learning, Federated learning, Federated Learning, Continual Learning, Data Heterogeneity, Catastrophic Forgetting BibRef

Hoang, T.[Tuan], Rana, S.[Santu], Gupta, S.I.[Sun-Il], Venkatesh, S.[Svetha],
Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection,
WACV24(4807-4816)
IEEE DOI Code:
WWW Link. 2404
Training, Measurement, Learning systems, Data privacy, Training data, Stochastic processes, Algorithms, Explainable, fair, accountable, ethical computer vision BibRef

Dukler, Y.[Yonatan], Bowman, B.[Benjamin], Achille, A.[Alessandro], Golatkar, A.[Aditya], Swaminathan, A.[Ashwin], Soatto, S.[Stefano],
SAFE: Machine Unlearning With Shard Graphs,
ICCV23(17062-17072)
IEEE DOI 2401
BibRef

Liu, J.[Junxu], Xue, M.S.[Ming-Sheng], Lou, J.[Jian], Zhang, X.Y.[Xiao-Yu], Xiong, L.[Li], Qin, Z.[Zhan],
MUter: Machine Unlearning on Adversarially Trained Models,
ICCV23(4869-4879)
IEEE DOI 2401
BibRef

Khattak, M.U.[Muhammad Uzair], Wasim, S.T.[Syed Talal], Naseer, M.[Muzammal], Khan, S.[Salman], Yang, M.H.[Ming-Hsuan], Khan, F.S.[Fahad Shahbaz],
Self-regulating Prompts: Foundational Model Adaptation without Forgetting,
ICCV23(15144-15154)
IEEE DOI Code:
WWW Link. 2401
BibRef

Chen, T.A.[Ting-An], Yang, D.N.[De-Nian], Chen, M.S.[Ming-Syan],
Overcoming Forgetting Catastrophe in Quantization-Aware Training,
ICCV23(17312-17321)
IEEE DOI 2401
BibRef

Kang, M.X.[Meng-Xue], Zhang, J.P.[Jin-Peng], Zhang, J.M.[Jin-Ming], Wang, X.S.[Xia-Shuang], Chen, Y.[Yang], Ma, Z.[Zhe], Huang, X.[Xuhui],
Alleviating Catastrophic Forgetting of Incremental Object Detection via Within-Class and Between-Class Knowledge Distillation,
ICCV23(18848-18858)
IEEE DOI 2401
BibRef

Chen, M.[Min], Gao, W.Z.[Wei-Zhuo], Liu, G.[Gaoyang], Peng, K.[Kai], Wang, C.[Chen],
Boundary Unlearning: Rapid Forgetting of Deep Networks via Shifting the Decision Boundary,
CVPR23(7766-7775)
IEEE DOI 2309
BibRef

Carrión, S.[Salvador], Casacuberta, F.[Francisco],
Continual Vocabularies to Tackle the Catastrophic Forgetting Problem in Machine Translation,
IbPRIA23(94-107).
Springer DOI 2307
BibRef

Kalb, T.[Tobias], Beyerer, J.[Jürgen],
Causes of Catastrophic Forgetting in Class-incremental Semantic Segmentation,
ACCV22(VII:361-377).
Springer DOI 2307
BibRef

Qu, Z.N.[Zhong-Nan], Liu, C.[Cong], Thiele, L.[Lothar],
Deep Partial Updating: Towards Communication Efficient Updating for On-Device Inference,
ECCV22(XI:137-153).
Springer DOI 2211
BibRef

Ye, J.W.[Jing-Wen], Fu, Y.F.[Yi-Fang], Song, J.[Jie], Yang, X.Y.[Xing-Yi], Liu, S.[Songhua], Jin, X.[Xin], Song, M.L.[Ming-Li], Wang, X.C.[Xin-Chao],
Learning with Recoverable Forgetting,
ECCV22(XI:87-103).
Springer DOI 2211
BibRef

Singh, P.[Pravendra], Mazumder, P.[Pratik], Karim, M.A.[Mohammed Asad],
Attaining Class-Level Forgetting in Pretrained Model Using Few Samples,
ECCV22(XIII:433-448).
Springer DOI 2211
BibRef

Wang, Z.Y.[Zhen-Yi], Shen, L.[Li], Fang, L.[Le], Suo, Q.L.[Qiu-Ling], Zhan, D.L.[Dong-Lin], Duan, T.[Tiehang], Gao, M.C.[Ming-Chen],
Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions,
ECCV22(XX:221-238).
Springer DOI 2211
BibRef

Boschini, M.[Matteo], Bonicelli, L.[Lorenzo], Porrello, A.[Angelo], Bellitto, G.[Giovanni], Pennisi, M.[Matteo], Palazzo, S.[Simone], Spampinato, C.[Concetto], Calderara, S.[Simone],
Transfer Without Forgetting,
ECCV22(XXIII:692-709).
Springer DOI 2211
BibRef

Liang, M.F.[Ming-Fu], Zhou, J.H.[Jia-Huan], Wei, W.[Wei], Wu, Y.[Ying],
Balancing Between Forgetting and Acquisition in Incremental Subpopulation Learning,
ECCV22(XXVI:364-380).
Springer DOI 2211
BibRef

Mehta, R.[Ronak], Pal, S.[Sourav], Singh, V.[Vikas], Ravi, S.N.[Sathya N.],
Deep Unlearning via Randomized Conditionally Independent Hessians,
CVPR22(10412-10421)
IEEE DOI 2210
Training, Law, Computational modeling, Face recognition, Semantics, Legislation, Predictive models, Transparency, fairness, Statistical methods BibRef

Feng, T.[Tao], Wang, M.[Mang], Yuan, H.J.[Hang-Jie],
Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation,
CVPR22(9417-9426)
IEEE DOI 2210
Location awareness, Training, Codes, Object detection, Detectors, Feature extraction, retrieval, categorization, Recognition: detection BibRef

Kim, J.[Junyaup], Woo, S.S.[Simon S.],
Efficient Two-stage Model Retraining for Machine Unlearning,
HCIS22(4360-4368)
IEEE DOI 2210
Deep learning, Training, Computational modeling, Data models BibRef

Ferdinand, Q.[Quentin], Clement, B.[Benoit], Oliveau, Q.[Quentin], Chenadec, G.L.[Gilles Le], Papadakis, P.[Panagiotis],
Attenuating Catastrophic Forgetting by Joint Contrastive and Incremental Learning,
CLVision22(3781-3788)
IEEE DOI 2210
Learning systems, Training, Deep learning, Adaptation models, Conferences, Computational modeling BibRef

Jain, H.[Himalaya], Vu, T.H.[Tuan-Hung], Pérez, P.[Patrick], Cord, M.[Matthieu],
CSG0: Continual Urban Scene Generation with Zero Forgetting,
CLVision22(3678-3686)
IEEE DOI 2210
Training, Visualization, Costs, Semantics, Memory management, Generative adversarial networks BibRef

Meng, Q.[Qiang], Zhang, C.X.[Chi-Xiang], Xu, X.Q.[Xiao-Qiang], Zhou, F.[Feng],
Learning Compatible Embeddings,
ICCV21(9919-9928)
IEEE DOI 2203
Training, Degradation, Visualization, Costs, Codes, Image retrieval, Representation learning, Faces, Recognition and classification BibRef

Binici, K.[Kuluhan], Pham, N.T.[Nam Trung], Mitra, T.[Tulika], Leman, K.[Karianto],
Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data,
WACV22(3625-3633)
IEEE DOI 2202
Deep learning, Energy consumption, Computational modeling, Neural networks, Memory management, Image and Video Synthesis BibRef

Benkert, R.[Ryan], Aribido, O.J.[Oluwaseun Joseph], AlRegib, G.[Ghassan],
Explaining Deep Models Through Forgettable Learning Dynamics,
ICIP21(3692-3696)
IEEE DOI 2201
Training, Deep learning, Image segmentation, Semantics, Predictive models, Data models, Example Forgetting, Semantic Segmentation BibRef

Roy, S.[Soumya], Sau, B.B.[Bharat Bhusan],
Can Selfless Learning improve accuracy of a single classification task?,
WACV21(4043-4051)
IEEE DOI 2106
Addresses the problem of catastrophic forgetting in continual learning. Training, Neurons, Task analysis BibRef

Mundt, M.[Martin], Pliushch, I.[Iuliia], Ramesh, V.[Visvanathan],
Neural Architecture Search of Deep Priors: Towards Continual Learning without Catastrophic Interference,
CLVision21(3518-3527)
IEEE DOI 2109
Training, Neural networks, Interference BibRef

Katakol, S.[Sudeep], Herranz, L.[Luis], Yang, F.[Fei], Mrak, M.[Marta],
DANICE: Domain adaptation without forgetting in neural image compression,
CLIC21(1921-1925)
IEEE DOI 2109
Video coding, Image coding, Codecs, Transfer learning, Interference BibRef

Kurmi, V.K.[Vinod K.], Patro, B.N.[Badri N.], Subramanian, V.K.[Venkatesh K.], Namboodiri, V.P.[Vinay P.],
Do not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting,
WACV21(736-745)
IEEE DOI 2106
Deep learning, Uncertainty, Computational modeling, Estimation, Data models BibRef

Nguyen, G.[Giang], Chen, S.[Shuan], Jun, T.J.[Tae Joon], Kim, D.[Daeyoung],
Explaining How Deep Neural Networks Forget by Deep Visualization,
EDL-AI20(162-173).
Springer DOI 2103
BibRef

Patra, A.[Arijit], Chakraborti, T.[Tapabrata],
Learn More, Forget Less: Cues from Human Brain,
ACCV20(IV:187-202).
Springer DOI 2103
BibRef

Liu, Y.[Yu], Parisot, S.[Sarah], Slabaugh, G.[Gregory], Jia, X.[Xu], Leonardis, A.[Ales], Tuytelaars, T.[Tinne],
More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning,
ECCV20(XXVI:699-716).
Springer DOI 2011
BibRef

Hayes, T.L.[Tyler L.], Kafle, K.[Kushal], Shrestha, R.[Robik], Acharya, M.[Manoj], Kanan, C.[Christopher],
Remind Your Neural Network to Prevent Catastrophic Forgetting,
ECCV20(VIII:466-483).
Springer DOI 2011
BibRef

Golatkar, A.[Aditya], Achille, A.[Alessandro], Soatto, S.[Stefano],
Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-output Observations,
ECCV20(XXIX: 383-398).
Springer DOI 2010
BibRef

Baik, S., Hong, S., Lee, K.M.,
Learning to Forget for Meta-Learning,
CVPR20(2376-2384)
IEEE DOI 2008
Task analysis, Attenuation, Adaptation models, Optimization, Training, Neural networks, Loss measurement BibRef

Zhang, Z., Lathuilière, S., Ricci, E., Sebe, N., Yan, Y., Yang, J.,
Online Depth Learning Against Forgetting in Monocular Videos,
CVPR20(4493-4502)
IEEE DOI 2008
Adaptation models, Videos, Estimation, Task analysis, Robustness, Machine learning, Training BibRef

Davidson, G., Mozer, M.C.,
Sequential Mastery of Multiple Visual Tasks: Networks Naturally Learn to Learn and Forget to Forget,
CVPR20(9279-9290)
IEEE DOI 2008
Task analysis, Training, Visualization, Standards, Neural networks, Color, Interference BibRef

Masarczyk, W., Tautkute, I.,
Reducing catastrophic forgetting with learning on synthetic data,
CLVision20(1019-1024)
IEEE DOI 2008
Task analysis, Optimization, Generators, Data models, Neural networks, Training, Computer architecture BibRef

Golatkar, A., Achille, A., Soatto, S.,
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks,
CVPR20(9301-9309)
IEEE DOI 2008
Training, Neural networks, Data models, Stochastic processes, Task analysis, Training data BibRef

Lee, K., Lee, K., Shin, J., Lee, H.,
Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild,
ICCV19(312-321)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. image sampling, learning (artificial intelligence), neural nets, distillation loss, global distillation, learning strategy, Neural networks BibRef

Nwe, T.L.[Tin Lay], Nataraj, B.[Balaji], Xie, S.D.[Shu-Dong], Li, Y.Q.[Yi-Qun], Lin, D.Y.[Dong-Yun], Sheng, D.[Dong],
Discriminative Features for Incremental Learning Classifier,
ICIP19(1990-1994)
IEEE DOI 1910
Incremental learning, Context Aware Advertisement, Few-shot incremental learning, Discriminative features, Catastrophic forgetting BibRef

Shmelkov, K., Schmid, C., Alahari, K.,
Incremental Learning of Object Detectors without Catastrophic Forgetting,
ICCV17(3420-3429)
IEEE DOI 1802
learning (artificial intelligence), neural nets, object detection, COCO datasets, PASCAL VOC 2007, annotations, Training data BibRef

Liu, X.L.[Xia-Lei], Masana, M., Herranz, L., van de Weijer, J.[Joost], López, A.M., Bagdanov, A.D.[Andrew D.],
Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting,
ICPR18(2262-2268)
IEEE DOI 1812
Task analysis, Training, Training data, Neural networks, Data models, Standards BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Implicit Neural Networks, Implicit Neural Representation.

