Feat: Multi-tiered cache for aws #699
Merged · +185 −2 · 6 commits
Commits (all by conico974):
- 8a48c59 implement a multi-tiered cache for aws
- f6c2914 fix linting
- f1c4c83 changeset
- 6695a2b review fix
- 0b0ccc4 Apply suggestions from code review
- 2f06186 added comment
New changeset file (+5 lines):

```md
---
"@opennextjs/aws": minor
---

Add a new multi-tiered incremental cache
```
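For context, an override like this would typically be selected in the project's `open-next.config.ts`. The sketch below is illustrative only: the exact config shape, and whether the override can be referenced by its string name, are assumptions not taken from this PR.

```typescript
// open-next.config.ts — illustrative sketch, not taken from this PR.
// "multi-tier-ddb-s3" is the `name` the new incremental cache registers;
// the surrounding config shape is an assumption.
const config = {
  default: {
    override: {
      incrementalCache: "multi-tier-ddb-s3",
    },
  },
};

export default config;
```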
143 changes: 143 additions & 0 deletions
packages/open-next/src/overrides/incrementalCache/multi-tier-ddb-s3.ts (new file):
```ts
import type { CacheValue, IncrementalCache } from "types/overrides";
import { customFetchClient } from "utils/fetch";
import { LRUCache } from "utils/lru";
import { debug } from "../../adapters/logger";
import S3Cache, { getAwsClient } from "./s3-lite";

// TTL for the local cache in milliseconds
const localCacheTTL = process.env.OPEN_NEXT_LOCAL_CACHE_TTL_MS
  ? Number.parseInt(process.env.OPEN_NEXT_LOCAL_CACHE_TTL_MS, 10)
  : 0;
// Maximum size of the local cache in number of entries
const maxCacheSize = process.env.OPEN_NEXT_LOCAL_CACHE_SIZE
  ? Number.parseInt(process.env.OPEN_NEXT_LOCAL_CACHE_SIZE, 10)
  : 1000;

const localCache = new LRUCache<{
  value: CacheValue<false>;
  lastModified: number;
}>(maxCacheSize);

const awsFetch = (body: RequestInit["body"], type: "get" | "set" = "get") => {
  const { CACHE_BUCKET_REGION } = process.env;
  const client = getAwsClient();
  return customFetchClient(client)(
    `https://dynamodb.${CACHE_BUCKET_REGION}.amazonaws.com`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/x-amz-json-1.0",
        "X-Amz-Target": `DynamoDB_20120810.${
          type === "get" ? "GetItem" : "PutItem"
        }`,
      },
      body,
    },
  );
};

const buildDynamoKey = (key: string) => {
  const { NEXT_BUILD_ID } = process.env;
  return `__meta_${NEXT_BUILD_ID}_${key}`;
};

/**
 * This cache implementation uses a multi-tier cache with a local cache, a DynamoDB metadata cache and an S3 cache.
 * It uses the same DynamoDB table as the default tag cache and the same S3 bucket as the default incremental cache.
 * It will first check the local cache.
 * If the local cache is expired, it will check the DynamoDB metadata cache to see if the local cache is still valid.
 * Lastly it will check the S3 cache.
 */
const multiTierCache: IncrementalCache = {
  name: "multi-tier-ddb-s3",
  async get(key, isFetch) {
    // First we check the local cache
    const localCacheEntry = localCache.get(key);
    if (localCacheEntry) {
      if (Date.now() - localCacheEntry.lastModified < localCacheTTL) {
        debug("Using local cache without checking ddb");
        return localCacheEntry;
      }
      try {
        // Here we'll check ddb metadata to see if the local cache is still valid
        const { CACHE_DYNAMO_TABLE } = process.env;
        const result = await awsFetch(
          JSON.stringify({
            TableName: CACHE_DYNAMO_TABLE,
            Key: {
              path: { S: buildDynamoKey(key) },
              tag: { S: buildDynamoKey(key) },
            },
          }),
        );
        if (result.status === 200) {
          const data = await result.json();
          const hasBeenDeleted = data.Item?.deleted?.BOOL;
          if (hasBeenDeleted) {
            localCache.delete(key);
            return { value: undefined, lastModified: 0 };
          }
          // If the metadata is older than the local cache, we can use the local cache.
          // If it's not found, we assume that no write has been done yet and we can use the local cache.
          const lastModified = data.Item?.revalidatedAt?.N
            ? Number.parseInt(data.Item.revalidatedAt.N, 10)
            : 0;
          if (lastModified <= localCacheEntry.lastModified) {
            debug("Using local cache after checking ddb");
            return localCacheEntry;
          }
        }
      } catch (e) {
        debug("Failed to get metadata from ddb", e);
      }
    }
    const result = await S3Cache.get(key, isFetch);
    if (result.value) {
      localCache.set(key, {
        value: result.value,
        lastModified: result.lastModified ?? Date.now(),
      });
    }
    return result;
  },

  // For both set and delete we choose to write to S3 first and then to DynamoDB.
  // This means that if the DynamoDB write fails, instances that don't have a local cache will still work as expected,
  // but instances that do have a local cache will serve stale data until the next successful set or delete.
  async set(key, value, isFetch) {
    const revalidatedAt = Date.now();
    await S3Cache.set(key, value, isFetch);
    await awsFetch(
      JSON.stringify({
        TableName: process.env.CACHE_DYNAMO_TABLE,
        Item: {
          tag: { S: buildDynamoKey(key) },
          path: { S: buildDynamoKey(key) },
          revalidatedAt: { N: String(revalidatedAt) },
        },
      }),
      "set",
    );
    localCache.set(key, {
      value,
      lastModified: revalidatedAt,
    });
  },
  async delete(key) {
    await S3Cache.delete(key);
    await awsFetch(
      JSON.stringify({
        TableName: process.env.CACHE_DYNAMO_TABLE,
        Item: {
          tag: { S: buildDynamoKey(key) },
          path: { S: buildDynamoKey(key) },
          deleted: { BOOL: true },
        },
      }),
      "set",
    );
    localCache.delete(key);
  },
};

export default multiTierCache;
```
30 changes: 30 additions & 0 deletions
The `LRUCache` implementation imported from `utils/lru` (new file):
```ts
export class LRUCache<T> {
  private cache: Map<string, T> = new Map();

  constructor(private maxSize: number) {}

  get(key: string) {
    const result = this.cache.get(key);

    // We could have used .has to allow nullish values to be stored, but we don't need that right now
    if (result) {
      // By removing and setting the key again we ensure it's the most recently used
      this.cache.delete(key);
      this.cache.set(key, result);
    }
    return result;
  }

  set(key: string, value: T) {
    if (this.cache.size >= this.maxSize) {
      // A Map iterates in insertion order, so the first key is the least recently used
      const firstKey = this.cache.keys().next().value;
      if (firstKey !== undefined) {
        this.cache.delete(firstKey);
      }
    }
    this.cache.set(key, value);
  }

  delete(key: string) {
    this.cache.delete(key);
  }
}
```
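To show the eviction order concretely, here is a self-contained sketch: a minimal copy of the class above (repeated so the snippet runs on its own) plus a usage example where touching an entry with `get` saves it from eviction.

```typescript
// Minimal copy of the LRU cache above, repeated so this snippet is runnable.
// A Map preserves insertion order: re-inserting on `get` moves an entry to
// the tail, so the head of the Map is always the least recently used.
class LRUCache<T> {
  private cache: Map<string, T> = new Map();
  constructor(private maxSize: number) {}

  get(key: string) {
    const result = this.cache.get(key);
    if (result) {
      // Re-insert so this key becomes the most recently used
      this.cache.delete(key);
      this.cache.set(key, result);
    }
    return result;
  }

  set(key: string, value: T) {
    if (this.cache.size >= this.maxSize) {
      // Evict the least recently used entry (the first key in the Map)
      const firstKey = this.cache.keys().next().value;
      if (firstKey !== undefined) this.cache.delete(firstKey);
    }
    this.cache.set(key, value);
  }
}

const cache = new LRUCache<number>(2);
cache.set("a", 1);
cache.set("b", 2);
cache.get("a"); // touch "a", so "b" is now the least recently used
cache.set("c", 3); // evicts "b", not "a"
const evicted = cache.get("b"); // undefined
const kept = cache.get("a"); // 1
```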
Review discussion:

Reviewer: Use `Promise.allSettled()` to parallelize?

conico974: This is on purpose actually. Given how it works, we have three choices for handling a write failure; `Promise.allSettled` is one of them, but then the behavior would be unpredictable in case one of the two writes fails. I should have added a comment explaining this. One other thing we could do is let the user choose the behavior they'd want. I'll update and merge the PR tomorrow in case we should choose another option.

Reviewer: Oh you're right 👍 Thanks for explaining very clearly.
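The write-ordering trade-off discussed above can be sketched with stand-in writers (`s3Write`/`ddbWrite` are hypothetical stubs, not the real S3 PUT and DynamoDB PutItem calls):

```typescript
// Stand-ins for the two remote writes performed by `set()`.
const order: string[] = [];
const s3Write = async () => {
  order.push("s3");
};
const ddbWrite = async () => {
  order.push("ddb");
};

// Sequential writes (what the PR merged): S3 is guaranteed to hold the fresh
// value before the DynamoDB metadata changes, so instances without a local
// cache stay correct even if the DynamoDB write then fails.
async function sequentialSet() {
  await s3Write();
  await ddbWrite();
}

// Parallel alternative suggested in review: lower latency, but if exactly
// one of the two writes fails, the S3 object and the DynamoDB metadata can
// disagree unpredictably.
async function parallelSet() {
  await Promise.allSettled([s3Write(), ddbWrite()]);
}

await sequentialSet();
await parallelSet();
// order is now ["s3", "ddb", "s3", "ddb"]
```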