
feat(ast_tools): pre-compile #[ast] macro output #5796

Draft · wants to merge 2 commits into main from 09-14-feat_ast_tools_pre-compile_ast_macro_output
Conversation

@overlookmotel (Collaborator) commented Sep 15, 2024

Work in progress.

Generate code for #[ast] macro with an oxc_ast_tools generator.

oxc_ast_macros crate now has no dependencies (no syn etc).

The guts of it is the code! macro. It's used the same way as quote!, except that it evaluates to a TokenStream of the code which would create the TokenStream that quote! evaluates to.
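
To make that relationship concrete, here's a rough sketch (illustrative names only, not the PR's actual output, and using proc_macro2 so it compiles standalone; the generated oxc_ast_macros code would use the compiler's built-in proc_macro types). Hand-written TokenStream construction like the function below is the kind of code that code! pre-generates, which is why the proc-macro crate no longer needs syn or quote at expansion time:

```rust
// Sketch: what `quote!( foo(bar); )` would build at macro-expansion time,
// written out by hand. code! expands ahead of time to code shaped roughly
// like this, which then ends up in the generated oxc_ast_macros source.
use proc_macro2::{Delimiter, Group, Ident, Punct, Spacing, Span, TokenStream, TokenTree};

fn expand_foo_call() -> TokenStream {
    // Inner tokens: `bar`
    let args = TokenStream::from_iter([TokenTree::Ident(Ident::new("bar", Span::call_site()))]);
    // Outer tokens: `foo ( bar ) ;`
    TokenStream::from_iter([
        TokenTree::Ident(Ident::new("foo", Span::call_site())),
        TokenTree::Group(Group::new(Delimiter::Parenthesis, args)),
        TokenTree::Punct(Punct::new(';', Spacing::Alone)),
    ])
}

fn main() {
    // Prints the constructed token stream, roughly: foo (bar) ;
    println!("{}", expand_foo_call());
}
```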

Unfortunately I'm not seeing much improvement in compile times, because other dependencies of oxc_ast also use syn, so it doesn't drop from the dependency graph. The generated code was originally 3000 lines; it has now been shrunk down to 600 lines, which gives a 5% reduction in debug build time for the oxc_ast crate - but that's still a disappointingly small improvement.

I'm opening this PR not necessarily with the intent of merging it, but to discuss whether this might be a useful approach. @rzvxa I'd appreciate your feedback.

Note: This technique would allow #[generate_derive] to inject the derived impl next to the type def, same as #[derive] does, which would solve the problem of types/fields needing to be pub(crate).
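
As an aside on that last point, here's a minimal sketch of why expansion position matters (hypothetical type and field names, not oxc_ast code): an impl emitted adjacent to the type definition, as #[derive] does, can access private fields, whereas an impl generated into a separate module or file cannot unless the fields are made more visible (e.g. pub(crate)):

```rust
mod ast {
    pub struct Thing {
        // A private field is fine when the derived impl is expanded in this module.
        value: u32,
    }

    // What an adjacent, derive-style expansion could emit:
    impl Thing {
        pub fn new(value: u32) -> Self {
            Self { value }
        }

        pub fn value(&self) -> u32 {
            self.value
        }
    }
}

// An impl generated into a separate module (roughly the situation the note
// above describes) would hit a privacy error instead:
//
//     mod generated {
//         impl crate::ast::Thing {
//             pub fn value(&self) -> u32 {
//                 self.value // error[E0616]: field `value` of struct `Thing` is private
//             }
//         }
//     }

fn main() {
    println!("{}", ast::Thing::new(1).value());
}
```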

graphite-app bot commented Sep 15, 2024

Your org has enabled the Graphite merge queue for merging into main

Add the label “0-merge” to the PR and Graphite will automatically add it to the merge queue when it’s ready to merge. Or use the label “hotfix” to add to the merge queue as a hot fix.

You must have a Graphite account and log in to Graphite in order to use the merge queue. Sign up using this link.

codspeed-hq bot commented Sep 15, 2024

CodSpeed Performance Report

Merging #5796 will not alter performance

Comparing 09-14-feat_ast_tools_pre-compile_ast_macro_output (90746ab) with main (bb95306)

Summary

✅ 29 untouched benchmarks

@Boshen changed the base branch from 09-14-refactor_ast_move_functions_to_top_level_in_ast_macro to graphite-base/5796 (September 16, 2024 04:05)
@Boshen force-pushed the 09-14-feat_ast_tools_pre-compile_ast_macro_output branch from f80c7af to 5fa7b5a (September 16, 2024 04:21)
@Boshen changed the base branch from graphite-base/5796 to main (September 16, 2024 04:22)
@Boshen force-pushed the 09-14-feat_ast_tools_pre-compile_ast_macro_output branch from 5fa7b5a to 502fd10 (September 16, 2024 04:22)
@overlookmotel force-pushed the 09-14-feat_ast_tools_pre-compile_ast_macro_output branch from 502fd10 to a40e8c7 (September 16, 2024 13:08)
@overlookmotel force-pushed the 09-14-feat_ast_tools_pre-compile_ast_macro_output branch from a40e8c7 to 90746ab (September 16, 2024 13:28)
/// TokenTree::Group(Group::new(Delimiter::Parenthesis, args))
/// ].into_iter())
/// ```
macro_rules! code {
Collaborator

This is super smart! I like what I'm seeing 👍

Collaborator Author

Thanks. But unfortunately it's not moving the needle on performance.

I suspect (but don't know) that creating TokenTrees and TokenStreams is inherently expensive due to how proc macros interface with the compiler. There may just be no way to make proc macros cheap.

Or maybe the #[ast] macro just isn't that costly in the first place, so optimizing it is never going to pay dividends.

Thanks for taking a look. I'm going to pause on this for now though, as I'm planning to spend this week focused on the transformer. Must... not... get... distracted...

By the way, these 2 articles were the inspiration. They do a very good job of explaining how proc macros work internally. I always thought it involved serialization, but it turns out it doesn't:

Collaborator

> Or maybe the #[ast] macro just isn't that costly in the first place, so optimizing it is never going to pay dividends.

I think this is more likely; no matter how much the proc macro overhead costs us, if we do enough work in a macro then eventually caching it would be faster. I suspect we have a better chance of getting some juice out of this approach in #4112, though it might be a wild goose chase.

And thanks for the links. They seem really interesting, especially the second part (I've only given them a cursory look). I'll give them a proper read.

Labels
A-ast-tools Area - AST tools
2 participants