Enhance aggregator to skip round check for evaluation only flow #1258
Conversation
Thanks for addressing this, @ishaileshpant! This way, OpenFL 1.7+ users would be able to use FedEval in a meaningful way (more improvements coming in 1.8).
PS: I just have one more suggestion, in addition to the latest set of comments by @MasterSkepticista.
Thanks @ishaileshpant - very glad to see you were able to run an evaluation flow on a pretrained model. I have some comments that I think will make this PR more "standalone"
openfl-workspace/workspace/plan/defaults/federated-evaluation/assigner.yaml
As noted offline, this PR successfully adds an aggregator CLI flag to set the task group and enables running an FL round on a pretrained model.
My other comments can be addressed in #1226 or a related PR.
Thanks @ishaileshpant!
Nice work, @ishaileshpant ! I've just caught a couple of typos to correct in the tutorial document. Other than that, the PR should be good to go!
Fixed all the typos and nits, @teoparvanov
…rning"
- enhance Aggregator to take a selected_task_group attribute to enable switching between fedeval and learning at the aggregator level
- rebase 16.Jan.1
- fix aggregator CLI test cases as per the new "selected_task_group" field in start
- changed default assigner task_group names to "learning" and "evaluation"
- updated workspaces to use the new task_group names: learning / evaluation
- updated as per review comments
- updated the FedEval documentation with e2e usage steps
- rebased 15-Jan.1
- fixed docs indentation issue, reduced the verbosity in the doc

Signed-off-by: Shailesh Pant <[email protected]>
Testing steps / results
We use the same plan for both training and evaluation, relying on the aggregator mode to skip the round check in evaluate mode. Taking the torch_cnn_mnist and torch_cnn_mnist_fed_eval workspaces as examples, this can be done in either of two ways:
- start the aggregator with the evaluation task group selected via the new CLI flag, or
- change two settings in the plan to make it ready for evaluation.

In this test we take the second approach.
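As an illustrative sketch of the second approach, the two plan changes might look like the excerpt below. The exact key names and file paths here are assumptions for illustration, not verbatim from the PR; consult the workspace's actual plan.yaml for the real layout.

```yaml
# plan/plan.yaml — hypothetical excerpt; keys and paths are illustrative
assigner:
  # point at the federated-evaluation assigner defaults instead of the training ones
  defaults: plan/defaults/federated-evaluation/assigner.yaml

aggregator:
  settings:
    rounds_to_train: 1  # a single round suffices for an evaluation-only run
```

With these two changes in place, the aggregator only schedules the evaluation task group and completes after one round.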
Since the only task executed was evaluation / aggregated_model_validation, the best and last pbuf files seen below still contain random weights, while the init pbuf was replaced with the trained model's pbuf and was used for reporting the accuracy.