Feat/raae 9/repo upgrades #23
Conversation
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
This is the setup for the VS Code debugger. I like to check these in, in case people want them for development, since it's a reference architecture.
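For reference, a minimal `launch.json` along these lines could look something like the sketch below; the debugger type, module, and args are assumptions for illustration, not taken from this repo.

```json
{
    // Schema version for VS Code debug configurations.
    "version": "0.2.0",
    "configurations": [
        {
            // Hypothetical entry: launch the API under the Python debugger.
            // "debugpy" is the current Python debugger type; older setups use "python".
            "name": "Debug API (illustrative)",
            "type": "debugpy",
            "request": "launch",
            "module": "uvicorn",
            "args": ["app.main:app", "--reload"],
            "justMyCode": true
        }
    ]
}
```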
- FROM python:3.8-slim-buster AS ApiImage
+ FROM python:3.11-slim-buster AS ApiImage
@tylerhutcherson slim worked fine for this one, so I kept the smaller image size.
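A rough sketch of what the slim-based stage might look like; the paths, dependency file, and command here are illustrative assumptions, not the repo's actual Dockerfile.

```dockerfile
# Illustrative stage on the slim base image from the diff above.
FROM python:3.11-slim-buster AS ApiImage

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY . .

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```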
vector_distance: float
similarity_score: float

def __init__(self, *args, **kwargs):
The similarity score is derived from vector_distance, so we can let Pydantic handle computing it as part of creating the object.
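A minimal sketch of that pattern: the field names match the diff, but the class name and the exact similarity formula (cosine distance, similarity = 1 - distance) are assumptions.

```python
from pydantic import BaseModel


class SearchResult(BaseModel):  # hypothetical model name
    vector_distance: float
    similarity_score: float

    def __init__(self, *args, **kwargs):
        # Derive similarity_score from vector_distance before Pydantic validates the model.
        # Assumes a cosine-style distance where similarity = 1 - distance.
        kwargs["similarity_score"] = 1 - kwargs["vector_distance"]
        super().__init__(*args, **kwargs)
```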
    product_vectors = json.load(f)
except FileNotFoundError:
    print("File not found, reading from S3")
    product_vectors = read_from_s3()
@tylerhutcherson I think I'd like to simplify this flow: either load from the local volume or pull from S3. If we pull from S3, I don't think we need to write that data to disk, because it's only needed to populate the database and it bloats the Docker containers.
I could add a script to pull this data down for people developing locally, but I think this is cleaner.
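A sketch of the simplified flow; the bucket, key, and function names are placeholders, not the repo's actual values.

```python
import json

import boto3  # assumed dependency for the S3 fallback


def load_product_vectors(local_path: str = "product_vectors.json"):
    """Load vectors from the local volume, or fall back to S3 without writing to disk."""
    try:
        with open(local_path) as f:
            return json.load(f)
    except FileNotFoundError:
        print("File not found, reading from S3")
        # Placeholder bucket/key; stream the object straight into memory instead of
        # writing it back into the container.
        obj = boto3.client("s3").get_object(
            Bucket="example-bucket", Key="product_vectors.json"
        )
        return json.loads(obj["Body"].read())
```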
Looking great. Left a few cleanup comments, but pretty awesome.
Just the one comment. Lots of great work here.
Primary changes: