From e056e6d9f07a1e866b190a053ec5ca1314b3eef5 Mon Sep 17 00:00:00 2001
From: Shivam Saboo
Date: Fri, 29 Mar 2019 22:41:01 +0530
Subject: [PATCH] Update README.md

---
 README.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9ca2da8..1223009 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,17 @@
 # Overcoming-Catastrophic-forgetting-in-Neural-Networks
 Elastic weight consolidation technique for incremental learning.
-!! Testing in progress !!
 ## About
 Use this API if you dont want your neural network to forget previously learnt tasks while doing transfer learning or domain adaption!
+## Results
+The experiment is done as follows:
+1. Train a 2-layer feed-forward neural network on MNIST for 4 epochs.
+2. Then train the same network on Fashion-MNIST for 4 epochs.
+This is done once with EWC and once without it, and accuracy is measured on the test set of each dataset using the same model. A constant learning rate of 1e-4 is used throughout with the Adam optimizer. The importance multiplier is kept at 10e5, and sampling (to estimate parameter importance) is done with half of the data before moving to the next dataset. A minimal sketch of this setup is shown under Usage below.
+
+| EWC | MNIST test acc. (%) | Fashion-MNIST test acc. (%) |
+| --- | ------------------- | --------------------------- |
+| Yes | 70.27 | 81.88 |
+| No | 48.43 | 86.69 |
 ## Usage
 ```python
 from elastic_weight_consolidation import ElasticWeightConsolidation
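# Minimal usage sketch (an editorial illustration, not part of the original patch): it mirrors
# the MNIST -> Fashion-MNIST experiment described under Results. The constructor arguments and
# the method names register_ewc_params / forward_backward_update are assumptions about this
# repository's wrapper class; check elastic_weight_consolidation.py for the exact signatures.
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A simple 2-layer feed-forward classifier, as described under Results.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 100), nn.ReLU(), nn.Linear(100, 10))

# Wrap the model; `weight` plays the role of the importance multiplier (10e5 in the experiment).
ewc = ElasticWeightConsolidation(model, nn.CrossEntropyLoss(), lr=1e-4, weight=10e5)

transform = transforms.ToTensor()
mnist = datasets.MNIST('./data', train=True, download=True, transform=transform)
fashion = datasets.FashionMNIST('./data', train=True, download=True, transform=transform)

# Task 1: train on MNIST for 4 epochs.
for epoch in range(4):
    for x, y in DataLoader(mnist, batch_size=100, shuffle=True):
        ewc.forward_backward_update(x, y)

# Estimate parameter importance by sampling half of MNIST before switching tasks.
ewc.register_ewc_params(mnist, batch_size=100, num_batches=len(mnist) // 200)

# Task 2: train on Fashion-MNIST for 4 epochs; the EWC penalty now discourages large
# changes to the parameters that mattered for MNIST.
for epoch in range(4):
    for x, y in DataLoader(fashion, batch_size=100, shuffle=True):
        ewc.forward_backward_update(x, y)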