
Remove backticks on replacement #100

Open
salah5 opened this issue Jan 17, 2025 · 3 comments

@salah5

salah5 commented Jan 17, 2025

Is it intended behaviour that, when using phantom mode for example and selecting the replace or append option, the inserted code still includes Markdown syntax such as the triple-backtick fence (e.g. ```python <llm code> ```)? Having to strip the backticks by hand every time code is inserted doesn't feel good.
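(For illustration only: a minimal sketch of the kind of post-processing being asked for here, assuming the model reply arrives as a single string. The function name and regex are hypothetical and not part of the package.)

```python
import re

def strip_code_fence(reply: str) -> str:
    """If the model reply is wrapped in a Markdown code fence
    (```python ... ```), return only the code inside it;
    otherwise return the reply unchanged."""
    match = re.match(r"^\s*```[\w+-]*\r?\n(.*?)\r?\n```\s*$", reply, re.DOTALL)
    return match.group(1) if match else reply

# Example: the fenced wrapper is removed before insertion.
print(strip_code_fence("```python\nprint('hello')\n```"))  # -> print('hello')
```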

@yaroslavyaroslav
Owner

Fair, it's pretty dumb right now. It's also not that easy to achieve, though, since phantom content isn't really meant to be parsed again once it has been presented. But I'll look into improving the UX of replacement/appending after I finish the grand refactoring, hopefully soon.

@lukebelz

This, plus an actual word filter, not just prompt engineering to remove the words. With DeepSeek R1, for example, the output always shows the think section, followed by the code, and then more text, so there's still a lot of cleaning up to do. And for whatever reason, it also only shows one line of the text; it doesn't show everything until I click insert, and then I see everything.

This package is very close to being great. Just needs a push over the finish line. Thanks for building this.

@yaroslavyaroslav
Owner

There's a catch with R1 and all the other reasoning models to come: they use HTML-like tags, which is what Phantoms render, so once a <thinking> tag appears the phantom waits for the closing tag before rendering the HTML. I still have to think about how to work around that within a phantom. I'm not even sure how well it would present such a huge amount of text at once.

However, panel mode works quite reliably with reasoning models, so I suggest you stick with it for the time being until this is fixed.

And could you elaborate on what kind of word filter you have in mind?
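(Again purely as an illustration: a sketch of one possible filter for reasoning output, assuming R1's <think>…</think> convention and that the reply is post-processed as a plain string. None of this is actual package code.)

```python
import re

def drop_think_block(reply: str) -> str:
    """Remove a leading <think>...</think> block that reasoning models
    such as DeepSeek R1 emit before the actual answer. If the closing
    tag hasn't streamed in yet, hide the partial block as well."""
    # Complete block: strip it together with the surrounding whitespace.
    cleaned = re.sub(r"^\s*<think>.*?</think>\s*", "", reply, flags=re.DOTALL)
    # Still-open block (mid-stream): show nothing until </think> arrives.
    if re.match(r"^\s*<think>", cleaned):
        return ""
    return cleaned

# Example with a finished reply: only the answer after </think> is kept.
print(drop_think_block("<think>chain of thought…</think>\nx = 1"))  # -> x = 1
```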

@yaroslavyaroslav yaroslavyaroslav pinned this issue Jan 24, 2025
@yaroslavyaroslav yaroslavyaroslav self-assigned this Jan 24, 2025
@yaroslavyaroslav yaroslavyaroslav added the enhancement New feature or request label Jan 24, 2025