
Add XIP cache management routines #2005

Open
earlephilhower opened this issue Oct 29, 2024 · 1 comment · May be fixed by #2013

Comments

@earlephilhower (Contributor)

The XIP cache works well except in certain cases involving PSRAM and flash updates, where the required workaround is (IMHO) really non-obvious.

Because cache cleaning needs some special magic to work with PSRAM, as discussed in the links above, it may make sense to add some XIP cache management operations to the SDK (especially "XIP clean"). Right now arduino-pico, CircuitPython and MicroPython each implement the same XIP cache management code separately. It would be cleaner and safer to factor that out into the SDK.

I imagine something as simple as

  • xip_cache_clean
  • xip_cache_invalidate
  • xip_cache_invalidate_range (?)

might be all that's needed, but maybe others have additional requirements?

Kind-of related to #1983, but at a higher level.

@will-v-pi

Just to add to this, a pin function would also be useful:

  • xip_cache_pin_range

@Wren6991 Wren6991 self-assigned this Nov 4, 2024
Wren6991 added a commit that referenced this issue Nov 4, 2024
Also add a cache clean to hardware_flash implementations, to avoid
losing pending writes on the subsequent invalidate.
@Wren6991 Wren6991 added this to the 2.1.0 milestone Nov 4, 2024
@lurch lurch linked a pull request Nov 4, 2024 that will close this issue

3 participants