
enhance: reuse thread pool for scheduled service #1389

Closed
wants to merge 1 commit into from

Conversation


@Lo1nt Lo1nt commented Jan 15, 2024

Motivation:

Thread resources may be exhausted when a new executor pool is created for certain scheduled tasks such as reconnection, where each consumer may have its own scheduled executor pool with its own thread. The total number of threads can grow as O(m) with the number of consumers.

Modification:

Add a reusable scheduled pool that returns the same pool instance for the same unique key.
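The keyed-reuse idea can be sketched as follows. This is a minimal, hypothetical illustration of the technique, not the PR's actual code; the class name `SharedScheduledPools` and the key strings are made up for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Sketch: one shared scheduled pool per unique key, so m consumers
// that use the same key share a single pool instead of each creating
// their own (threads per key go from O(m) to O(1)).
public class SharedScheduledPools {
    private static final Map<String, ScheduledExecutorService> POOLS =
            new ConcurrentHashMap<>();

    // Returns the existing pool for this key, creating it on first use.
    // computeIfAbsent guarantees at most one pool is created per key
    // even under concurrent calls.
    public static ScheduledExecutorService get(String key) {
        return POOLS.computeIfAbsent(key,
                k -> Executors.newScheduledThreadPool(1));
    }
}
```

Callers that previously built a private `ScheduledThreadPoolExecutor` would instead call `SharedScheduledPools.get("reconnect")` and share the result.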

Result:

@sofastack-cla sofastack-cla bot added cla:yes CLA is ok size/L labels Jan 15, 2024

codecov bot commented Jan 15, 2024

Codecov Report

Attention: 39 lines in your changes are missing coverage. Please review.

Comparison is base (9faa8b8) 72.04% compared to head (8ffa761) 72.03%.
Report is 4 commits behind head on master.

Files Patch % Lines
...va/com/alipay/sofa/rpc/server/bolt/BoltServer.java 53.06% 16 Missing and 7 partials ⚠️
.../sofa/rpc/dynamic/DynamicConfigManagerFactory.java 0.00% 6 Missing ⚠️
...ava/com/alipay/sofa/rpc/server/UserThreadPool.java 75.00% 2 Missing and 2 partials ⚠️
...ay/sofa/rpc/client/AllConnectConnectionHolder.java 75.00% 1 Missing ⚠️
...sofa/rpc/common/threadpool/ThreadPoolConstant.java 0.00% 1 Missing ⚠️
...threadpool/extension/VirtualThreadPoolFactory.java 50.00% 1 Missing ⚠️
.../alipay/sofa/rpc/server/UserVirtualThreadPool.java 83.33% 1 Missing ⚠️
...a/com/alipay/sofa/rpc/event/LookoutSubscriber.java 66.66% 0 Missing and 1 partial ⚠️
...ipay/sofa/rpc/server/bolt/BoltServerProcessor.java 50.00% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##             master    #1389      +/-   ##
============================================
- Coverage     72.04%   72.03%   -0.02%     
- Complexity      785      792       +7     
============================================
  Files           417      423       +6     
  Lines         17709    17807      +98     
  Branches       2760     2768       +8     
============================================
+ Hits          12759    12827      +68     
- Misses         3546     3567      +21     
- Partials       1404     1413       +9     


@Lo1nt Lo1nt requested a review from EvenLjj January 16, 2024 02:16
@Extension("reuse-scheduled")
public class ReuseScheduledThreadPoolFactory implements SofaExecutorFactory {

Map<String, Executor> uniqueMap = new ConcurrentHashMap<>();
This is best made private; also, consider how the Executors held here will be reclaimed.

* @author junyuan
* @version ReuseScheduledThreadPoolFactory.java, v 0.1 2024-01-15 19:44 junyuan Exp $
*/
@Extension("reuse-scheduled")
@EvenLjj EvenLjj Jan 16, 2024


This is easy to confuse with the processing threads in BoltServer. RPC has many other async threads; could you consider making those an Extension as well, or creating a dedicated class to manage this kind of thread pool?


@Override
public Executor createExecutor(String namePrefix, ServerConfig serverConfig) {
return uniqueMap.computeIfAbsent(namePrefix, key -> new ScheduledThreadPoolExecutor(1, new NamedThreadFactory(namePrefix, true)));
Previously coreSize was set to 1, and each Consumer had its own single-threaded Schedule. If we switch to a shared Schedule thread pool but still use a single thread, could that cause performance problems and delayed updates?
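One way to address this concern would be to make the shared pool's core size configurable instead of hard-coding it to 1. This is a hypothetical variant, not part of this PR; the property name `rpc.scheduled.coreSize` and the class name are assumptions for illustration:

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;

// Hypothetical variant: core size is read from a system property
// (default 1) so a pool shared by many consumers can be scaled up
// without code changes. "rpc.scheduled.coreSize" is an assumed name.
public class ConfigurableScheduledPool {
    public static ScheduledThreadPoolExecutor create(String namePrefix) {
        int coreSize = Integer.getInteger("rpc.scheduled.coreSize", 1);
        return new ScheduledThreadPoolExecutor(coreSize, r -> {
            // Daemon thread with a readable name, mirroring the
            // NamedThreadFactory(namePrefix, true) usage in the PR.
            Thread t = new Thread(r, namePrefix + "-scheduler");
            t.setDaemon(true);
            return t;
        });
    }
}
```

With such a knob, a deployment with many consumers sharing one key could raise the core size while the single-consumer default stays at 1.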

@Lo1nt Lo1nt closed this Jan 18, 2024