diff --git a/content/authors.en.md b/content/authors.en.md index 0984ad06..4dfc92bd 100644 --- a/content/authors.en.md +++ b/content/authors.en.md @@ -14,11 +14,15 @@ weight: 100 1. Daniel Yoder ([danielsyoder](https://github.com/danielsyoder)) - The brains behind amazon-dynamodb-labs.com and the co-creator of the design scenarios ### 2024 additions -The Generative AI workshop LBED was released in 2024: +The Generative AI workshop LBED was released in early 2024: 1. John Terhune - ([@terhunej](https://github.com/terhunej)) - Primary author 1. Zhang Xin - ([@SEZ9](https://github.com/SEZ9)) - Content contributor and original author of a lab that John used as the basis of LBED 1. Sean Shriver - ([@switch180](https://github.com/switch180)) - Editor, tech reviewer, and merger +The LSQL relational migration lab was released in late 2024: +1. Robert McCauley - ([@robm26](https://github.com/robm26)) - Primary author +1. Sean Shriver - ([@switch180](https://github.com/switch180)) - Editor, tech reviewer, and merger + ### 2023 additions The serverless event driven architecture lab was added in 2023: diff --git a/content/change-data-capture/index.en.md b/content/change-data-capture/index.en.md index 409b4ff9..2ce11572 100644 --- a/content/change-data-capture/index.en.md +++ b/content/change-data-capture/index.en.md @@ -2,7 +2,7 @@ title: "LCDC: Change Data Capture for Amazon DynamoDB" chapter: true description: "200 level: Hands-on exercises with DynamoDB Streams and Kinesis Data Streams with Kinesis Analytics." -weight: 40 +weight: 80 --- In this workshop, you will learn how to perform change data capture of item level changes on DynamoDB tables using [Amazon DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html) and [Amazon Kinesis Data Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html).
This technique allows you to develop event-driven solutions that are initiated by alterations made to item-level data stored in DynamoDB. diff --git a/content/index.en.md b/content/index.en.md index 99bcc9c8..1861c478 100644 --- a/content/index.en.md +++ b/content/index.en.md @@ -16,7 +16,8 @@ Prior expertise with AWS and NoSQL databases is beneficial but not required to c If you're brand new to DynamoDB with no experience, you may want to begin with *Hands-on Labs for Amazon DynamoDB*. If you want to learn the design patterns for DynamoDB, check out *Advanced Design Patterns for DynamoDB* and the *Design Challenges* scenarios. ### Looking for a larger challenge? -The DynamoDB Immersion Day has a series of workshops designed to cover advanced topics. If you want to dig deep into streaming aggregations with AWS Lambda and DynamoDB Streams, consider LEDA. Or if you want an easier introduction CDC you can consider LCDC. +The DynamoDB Immersion Day has a series of workshops designed to cover advanced topics. If you want to dig deep into streaming aggregations with AWS Lambda and DynamoDB Streams, consider LEDA. Or if you want an easier introduction to CDC, you can consider LCDC. Do you have a relational database to migrate to DynamoDB? We offer LSQL and an AWS DMS lab, LDMS; we highly recommend LSQL unless you need to use DMS. + Do you want to integrate Generative AI to create a context-aware reasoning application? If so, consider LBED, a lab that takes a product catalog from DynamoDB and continuously indexes it into OpenSearch Service for natural language queries supported by Amazon Bedrock.
Dive into the content: diff --git a/content/hands-on-labs/rdbms-migration/index.en.md b/content/rdbms-migration/index.en.md similarity index 89% rename from content/hands-on-labs/rdbms-migration/index.en.md rename to content/rdbms-migration/index.en.md index 26f1f9a0..dfff2106 100644 --- a/content/hands-on-labs/rdbms-migration/index.en.md +++ b/content/rdbms-migration/index.en.md @@ -1,10 +1,10 @@ --- -title: "5. LMIG: Relational Modeling & Migration" +title: "LDMS: AWS DMS Migration" date: 2021-04-25T07:33:04-05:00 weight: 50 --- -In this module, also classified as LMIG, you will learn how to design a target data model in DynamoDB for highly normalized relational data in a relational database. +In this module, classified as LDMS, you will learn how to design a target data model in DynamoDB for highly normalized data in a relational database. The exercise also guides a step-by-step migration of an IMDb dataset from a self-managed MySQL database instance on EC2 to Amazon DynamoDB, a fully managed key-value database. At the end of this lesson, you should feel confident in your ability to design and migrate an existing relational database to Amazon DynamoDB.
diff --git a/content/hands-on-labs/rdbms-migration/migration-chapter00.en.md b/content/rdbms-migration/migration-chapter00.en.md similarity index 100% rename from content/hands-on-labs/rdbms-migration/migration-chapter00.en.md rename to content/rdbms-migration/migration-chapter00.en.md diff --git a/content/hands-on-labs/rdbms-migration/migration-chapter02-1.en.md b/content/rdbms-migration/migration-chapter02-1.en.md similarity index 100% rename from content/hands-on-labs/rdbms-migration/migration-chapter02-1.en.md rename to content/rdbms-migration/migration-chapter02-1.en.md diff --git a/content/hands-on-labs/rdbms-migration/migration-chapter02.en.md b/content/rdbms-migration/migration-chapter02.en.md similarity index 100% rename from content/hands-on-labs/rdbms-migration/migration-chapter02.en.md rename to content/rdbms-migration/migration-chapter02.en.md diff --git a/content/hands-on-labs/rdbms-migration/migration-chapter03.en.md b/content/rdbms-migration/migration-chapter03.en.md similarity index 100% rename from content/hands-on-labs/rdbms-migration/migration-chapter03.en.md rename to content/rdbms-migration/migration-chapter03.en.md diff --git a/content/hands-on-labs/rdbms-migration/migration-chapter04.en.md b/content/rdbms-migration/migration-chapter04.en.md similarity index 100% rename from content/hands-on-labs/rdbms-migration/migration-chapter04.en.md rename to content/rdbms-migration/migration-chapter04.en.md diff --git a/content/hands-on-labs/rdbms-migration/migration-chapter05.en.md b/content/rdbms-migration/migration-chapter05.en.md similarity index 100% rename from content/hands-on-labs/rdbms-migration/migration-chapter05.en.md rename to content/rdbms-migration/migration-chapter05.en.md diff --git a/content/hands-on-labs/rdbms-migration/migration-chapter06.en.md b/content/rdbms-migration/migration-chapter06.en.md similarity index 100% rename from content/hands-on-labs/rdbms-migration/migration-chapter06.en.md rename to 
content/rdbms-migration/migration-chapter06.en.md diff --git a/content/relational-migration/application refactoring/index.en.md b/content/relational-migration/application refactoring/index.en.md new file mode 100644 index 00000000..71188b31 --- /dev/null +++ b/content/relational-migration/application refactoring/index.en.md @@ -0,0 +1,30 @@ +--- +title : "Application Refactoring" +weight : 40 +--- + +## Updating the Client Application for DynamoDB +After you have chosen your DynamoDB table schema and migrated any historical data over, +you can consider what code changes are required so a new version of your app can call the DynamoDB +read and write APIs. + +The web app we have been using includes forms and buttons to perform standard CRUD (Create, Read, Update, Delete) operations. + +The web app makes HTTP calls to the published API using standard GET and POST methods against certain API paths. + +1. In Cloud9, open the left nav and locate the file **app.py**. +2. Double-click to open and review this file. + +In the bottom half of the file you will see several small handler functions that +pass core read and write requests on to the **db** object's functions. + + +Notice the file contains a conditional import for the **db** object. + +```python +if migration_stage == 'relational': + from chalicelib import mysql_calls as db +else: + from chalicelib import dynamodb_calls as db +``` + diff --git a/content/relational-migration/application refactoring/index2.en.md b/content/relational-migration/application refactoring/index2.en.md new file mode 100644 index 00000000..46d7c383 --- /dev/null +++ b/content/relational-migration/application refactoring/index2.en.md @@ -0,0 +1,32 @@ +--- +title : "DynamoDB-ready middle tier" +weight : 41 +--- + +## Deploy a new DynamoDB-ready API + +Recall that we previously ran the command ```chalice deploy --stage relational``` +to create the MySQL-ready middle tier.
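This conditional import in **app.py** works because both chalicelib modules are expected to expose the same function names, so the calling code never changes when the stage switches. A minimal, hypothetical illustration of that swappable-backend pattern (the class and function names below are illustrative stand-ins, not the workshop's actual code):

```python
# Sketch of the swappable "db" backend pattern used by app.py.
# MySQLCalls / DynamoDBCalls stand in for chalicelib/mysql_calls.py
# and chalicelib/dynamodb_calls.py; list_customers is a hypothetical name.

class MySQLCalls:
    def list_customers(self):
        return ["from MySQL"]

class DynamoDBCalls:
    def list_customers(self):
        return ["from DynamoDB"]

def get_db(migration_stage):
    # Mirrors: if migration_stage == 'relational': import mysql_calls as db ...
    if migration_stage == "relational":
        return MySQLCalls()
    return DynamoDBCalls()

db = get_db("dynamodb")
print(db.list_customers())  # the caller never changes; only the backend does
```

Because both backends satisfy the same interface, deploying a different stage is enough to repoint the whole app.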
+ +We can repeat this to create a new API Gateway and Lambda stack, this time using the DynamoDB stage. + +1. Within the Cloud9 terminal window, run: +```bash +chalice deploy --stage dynamodb +``` +2. When this completes, find the new Rest API URL and copy it. +3. You can paste this into a new browser tab to test it. You should see a status message indicating +the DynamoDB version of the API is working. + +We now need a separate browser to test out the full web app experience, since +the original browser has a cookie set to the relational Rest API. + +4. If you have multiple browsers on your laptop, such as Edge, Firefox, or Safari, +open a different browser and navigate to the web app: + +[https://amazon-dynamodb-labs.com/static/relational-migration/web/index.html](https://amazon-dynamodb-labs.com/static/relational-migration/web/index.html). + +(You can also open the same browser in Incognito Mode for this step.) + +5. Click the Target API button and paste in the new Rest API URL. +6. Notice the title of the page has updated to **DynamoDB App** in a blue color. If it isn't blue, you can refresh the page and see the color change. diff --git a/content/relational-migration/application refactoring/index3.en.md b/content/relational-migration/application refactoring/index3.en.md new file mode 100644 index 00000000..7e31c223 --- /dev/null +++ b/content/relational-migration/application refactoring/index3.en.md @@ -0,0 +1,36 @@ +--- +title : "Testing and reviewing DynamoDB code" +weight : 42 +--- + +## Test drive your DynamoDB application + +1. Click Tables to see a list of available tables in the account. You should see the +Customers table, vCustOrders table, and a few other tables used by separate workshops. + +2. Click on the Customers table, then click the SCAN button to see the table's data. +3. Test the CRUD operations, such as get-item and the update and delete buttons in the data grid, +to make sure they work against the DynamoDB table. +4. 
Click on the Querying tab to display the form with GSIs listed. +5. On the idx_region GSI, enter 'North' and press GO. + +![DynamoDB GSI Form](/static/images/relational-migration/ddb_gsi.png) + +## Updating DynamoDB functions + +Let's make a small code change to demonstrate the process to customize the DynamoDB functions. + +6. In the Cloud9 left nav, locate the chalicelib folder and open it. +7. Locate and open the file dynamodb_calls.py. +8. Search for the text ```get_request['ConsistentRead'] = False``` +9. Update this value from False to True and click File/Save to save your work. +10. In the terminal prompt, redeploy: + +```bash +chalice deploy --stage dynamodb +``` + +11. Return to the web app, click on the Customers table, enter cust_id value "0001", and click the GET ITEM button. +12. Verify a record was retrieved for you. This record was found using a strongly consistent read. +13. Feel free to extend the DynamoDB code to add new functions or modify existing ones. + diff --git a/content/relational-migration/data migration/index.en.md b/content/relational-migration/data migration/index.en.md new file mode 100644 index 00000000..1a4077ae --- /dev/null +++ b/content/relational-migration/data migration/index.en.md @@ -0,0 +1,20 @@ +--- +title : "Data Migration" +weight : 30 +--- + +## Transform, Extract, Convert, Stage, Import + +Recall that our strategy for migrating table data into DynamoDB via S3 was +summarized in the :link[Workshop Introduction]{href="../introduction/index5" target=_blank}. + +For each table or view that we want to migrate, we need a routine that will ```SELECT *``` from it +and convert the result dataset into DynamoDB JSON before writing it to an S3 bucket. + +![Migration Flow](/static/images/relational-migration/migrate_flow.png) + +For migrations of very large tables we may choose to use purpose-built data tools like +AWS Glue, Amazon EMR, or AWS DMS.
These tools can help you define and coordinate multiple +parallel jobs that perform the work to extract, transform, and stage data into S3. + +In this workshop we can use a Python script to demonstrate this ETL process. diff --git a/content/relational-migration/data migration/index2.en.md b/content/relational-migration/data migration/index2.en.md new file mode 100644 index 00000000..ca66d86c --- /dev/null +++ b/content/relational-migration/data migration/index2.en.md @@ -0,0 +1,36 @@ +--- +title : "ETL Scripts" +weight : 31 +--- + + +## mysql_s3.py + +A script called mysql_s3.py is provided that performs all the work to convert and load a query result +set into S3. We can run this script in preview mode by using the "stdout" parameter. + +1. Run: +```bash +python3 mysql_s3.py Customers stdout +``` +You should see results in DynamoDB JSON format: + +![mysql_s3.py output](/static/images/relational-migration/mysql_s3_output.png) + +2. Next, run it for our view: +```bash +python3 mysql_s3.py vCustOrders stdout +``` +You should see similar output from the view results. + +The script can write these to S3 for us. We just need to omit the "stdout" command line parameter. + +3. Now, run the script without preview mode: +```bash +python3 mysql_s3.py Customers +``` +You should see confirmation that objects have been written to S3: + +![mysql_s3.py output](/static/images/relational-migration/mysql_s3_write_output.png) + + diff --git a/content/relational-migration/data migration/index3.en.md b/content/relational-migration/data migration/index3.en.md new file mode 100644 index 00000000..88a57bbf --- /dev/null +++ b/content/relational-migration/data migration/index3.en.md @@ -0,0 +1,54 @@ +--- +title : "Full Migration" +weight : 32 +--- + +## DynamoDB Import from S3 + +The Import from S3 feature is a convenient way to have data loaded into a new DynamoDB table. 
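As background for the import step, the objects staged in S3 must be in DynamoDB JSON, where every attribute value is wrapped with a type descriptor such as S (string) or N (number). A simplified, hypothetical sketch of the conversion a script like mysql_s3.py performs (the real script handles more types and edge cases):

```python
# Hypothetical sketch of converting one SQL result row into DynamoDB JSON.
# Simplified: handles only strings, numbers, and NULL values.
from decimal import Decimal

def to_ddb_json(row: dict) -> dict:
    item = {}
    for col, val in row.items():
        if val is None:
            item[col] = {"NULL": True}
        elif isinstance(val, (int, float, Decimal)):
            item[col] = {"N": str(val)}  # DynamoDB sends numbers as strings
        else:
            item[col] = {"S": str(val)}
    return {"Item": item}

row = {"cust_id": "0001", "name": "Big Taxi Co", "balance": Decimal("150.25")}
print(to_ddb_json(row))
# → {'Item': {'cust_id': {'S': '0001'}, 'name': {'S': 'Big Taxi Co'}, 'balance': {'N': '150.25'}}}
```

One such wrapped object per row, written line by line to S3, is the shape the import consumes.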
+Learn more about this feature [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataImport.HowItWorks.html). + +Import creates a brand-new table; it is not able to load data into an existing table. +Therefore, it is most useful for the one-time initial load of data during a migration. + +## migrate.sh + +A script is provided that performs multiple steps to coordinate a migration: +* Runs **mysql_desc_ddb.py** and stores the result in a table definition JSON file +* Runs **mysql_s3.py** to extract, transform, and load data into an S3 bucket +* Uses the **aws dynamodb import-table** CLI command to request a new table, providing the bucket name and table definition JSON file + +1. Run: +```bash +./migrate.sh Customers +``` +The script should produce output as shown here: + +![Migrate Output](/static/images/relational-migration/migrate_output.png) + +Notice the ARN returned. This is the ARN of the Import job, not the new DynamoDB table. + +The import will take a few minutes to complete. + +2. Optional: You can check the status of an import job using this command, by setting the Import ARN on line two. + +```bash +aws dynamodb describe-import \ + --import-arn '' \ + --output json --query '{"Status ":ImportTableDescription.ImportStatus, "FailureCode ":ImportTableDescription.FailureCode, "FailureMessage ":ImportTableDescription.FailureMessage }' +``` + +We can also check the import status within the AWS Console. + +3. Click into the separate browser tab titled "AWS Cloud9" to open the AWS Console. +4. In the search box, type DynamoDB to visit the DynamoDB console. +5. From the left nav, click Imports from S3. +6. Notice your import is listed along with the current status. + ![Import from S3](/static/images/relational-migration/import-from-s3.png) +7. Once the import has completed, you can click it to see a summary including item count and the size of the import. +8. On the left nav, click Tables. +9. 
In the list of tables, click on the Customers table. +10. On the top right, click on Explore Table Items. +11. Scroll down until you see a grid with your imported data. + +Congratulations! You have completed a relational-to-DynamoDB migration. diff --git a/content/relational-migration/data migration/index4.en.md b/content/relational-migration/data migration/index4.en.md new file mode 100644 index 00000000..e5f6bf93 --- /dev/null +++ b/content/relational-migration/data migration/index4.en.md @@ -0,0 +1,36 @@ +--- +title : "VIEW migration" +weight : 33 +--- + +## Migrating from a VIEW + +In the previous step, you simply ran ```./migrate.sh Customers``` to perform a migration of this table +and data to DynamoDB. + +You can repeat this process to migrate the custom view vCustOrders. + +1. Run: +```bash +./migrate.sh vCustOrders +``` + +The script assumes you want a two-part primary key of Partition Key and Sort Key, found in the two leading columns. + +If you wanted a Partition Key-only table, you could specify this like so: + +```bash +./migrate.sh vCustOrders 1 +``` + +But don't run this command, because if you do, the S3 Import will fail as you already have a table called vCustOrders. +You could create another view with a different name and import that, or just delete the DynamoDB table +from the DynamoDB console before attempting another migration of vCustOrders. + +A Partition Key-only table is not advisable here anyway, since this particular dataset is not unique by just the first column. + +![View output](/static/images/relational-migration/view_result.png) + +::alert[Import will write all the records it finds in the bucket to the table. If a duplicate record is encountered, it will simply overwrite it.
Please be sure that your S3 data does not contain any duplicates based on the Key(s) of the new table you define.]{header="Note:"} + +A second import attempt is also ruled out: because you created a vCustOrders table in step 1, another Import would not be able to replace the existing table and would fail. diff --git a/content/relational-migration/data migration/index5.en.md b/content/relational-migration/data migration/index5.en.md new file mode 100644 index 00000000..d9571eda --- /dev/null +++ b/content/relational-migration/data migration/index5.en.md @@ -0,0 +1,30 @@ +--- +title : "SQL Transformation Patterns for DynamoDB" +weight : 34 +--- + +## Shaping Data with SQL + +Let's return to the web app and explore some techniques you can use to shape and enrich your relational +data before importing it to DynamoDB. + +1. Within the web app, refresh the browser page. +2. Click on the Querying tab. +3. Notice the set of SQL Sample buttons below the SQL editor. +4. Click button one. +The OrderLines table has a two-part primary key, as is common with DynamoDB. We can think of the returned dataset as an Item Collection. +5. Repeat by clicking each of the other sample buttons. Check the comment at the top of each query, which summarizes the technique being shown. + +![SQL Samples](/static/images/relational-migration/sparse.png) + +Notice the final two sample buttons. These demonstrate alternate ways to combine data from multiple tables. +We already saw how to combine tables with a JOIN operator, resulting in a denormalized data set. + +The final button shows a different approach to combining tables, without using JOIN. +You can use a UNION ALL between multiple SQL queries to stack datasets together as one. +When we arrange table data like this, we describe each source table as an entity, and so the single DynamoDB +table will be overloaded with multiple entities. Because of this, we can set the partition key and sort key
Because of this, we can set the partition key and sort key +names to generic values of PK and SK, and add some decoration to the key values so that it's clear what type +of entity a given record represents. + +![Stacked entities](/static/images/relational-migration/stacked.png) \ No newline at end of file diff --git a/content/relational-migration/data migration/index6.en.md b/content/relational-migration/data migration/index6.en.md new file mode 100644 index 00000000..11a68202 --- /dev/null +++ b/content/relational-migration/data migration/index6.en.md @@ -0,0 +1,20 @@ +--- +title : "Custom VIEWs" +weight : 35 +--- + +## Challenge: Create New Views + +The SQL editor window is provided so that you have an easy way to run queries and +experiment with data transformation techniques. + +Using the sample queries as a guide, see how many techniques you can combine in a single query. +Look for opportunities to align attributes across the table so that they can be queried by a GSI. +Consider using date fields in column two, so that they become Sort Key values, and be queryable with +DynamoDB range queries. + +1. When you have a SQL statement you like, click the CREATE VIEW button. +2. In the prompt, enter a name for your new view. This will add a CREATE VIEW statement to the top of you query. +3. Click RUN SQL to create the new view. +4. Refresh the page, and your view should appear as a button next to the vCustOrders button. 
+ diff --git a/content/relational-migration/index.en.md b/content/relational-migration/index.en.md new file mode 100644 index 00000000..7b35a2da --- /dev/null +++ b/content/relational-migration/index.en.md @@ -0,0 +1,67 @@ +--- +title: "LSQL: Relational Migration to DynamoDB" +weight: 35 +--- + +![Relational Migration](/static/images/relational-migration/frontpage.png) + +Developers are choosing to migrate relational database applications to DynamoDB +to take advantage of DynamoDB's serverless scalable architecture, +predictable low-latency performance, high availability and durability, and low maintenance. + +However, migrating an application from a relational database like MySQL onto a NoSQL database like Amazon DynamoDB +requires careful planning to achieve a successful outcome. + +This workshop will give you hands-on experience and tools to evaluate existing relational tables, +define logic to transform and shape relational data, +and build migration jobs to extract, stage, and load data into DynamoDB. + +A reference application is provided that reads and writes to a relational database. +A new version of the app that uses DynamoDB instead is included, +to highlight the code updates required to use the DynamoDB API. + + +### Relational Migration to DynamoDB Guide +A robust set of documentation has been published on strategies for migrating relational databases to DynamoDB, +which can be found at the link below. This workshop complements the guide and provides hands-on practice implementing +many of the approaches discussed within. +You can review this guidance when considering and planning your own relational migration to DynamoDB. + +[AWS Documentation: Relational Migration to DynamoDB Developer Guide](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/migration-guide.html) + + +## Workshop overview + +The workshop provides a MySQL instance running on EC2, a Cloud9 developer workstation, +and an S3 bucket for staging data.
+ +You will create a serverless API and Python Lambda function that +performs database read and write operations against the relational database, +and then deploy a new version that performs read and write operations against DynamoDB. + +A sample web app is provided that: +* Acts as a GUI test harness for the serverless API +* Converts tables and indexes into suggested DynamoDB tables and GSIs +* Has a SQL editor with a set of sample queries, and hints on how to combine tables +* Performs read and write operations to both MySQL and DynamoDB +and provides hints and suggestions for building a migration + +You will run a set of scripts that: +* Deploys a set of sample MySQL tables, views, and data. +* Converts MySQL table metadata to a DynamoDB table definition. +* Converts the results of a SQL query into DynamoDB JSON format, and stores them in the Amazon S3 bucket. +* Performs a full migration by running a SQL query, transforming results to DynamoDB JSON, writing to Amazon S3, then starting a DynamoDB Import job. + +Developer challenge: Run the provided SQL samples showing data modeling techniques, +then apply them to create a new VIEW and use this to perform a custom import. + +Developer challenge: Write a new set of data access functions that point to DynamoDB. + +### Requirements +This workshop is designed to run in an immersion day on Workshop Studio in an AWS-provided environment that includes a MySQL database on EC2. It cannot be run in your own AWS account; however, the code is all open source. + +### Technical Depth +This is an L300-level workshop. SQL, Python, and Bash skills will help but are not required. +### Code Project +Attendees will use scripts and tools from the /workshops/relational-migration folder of the
diff --git a/content/relational-migration/introduction/img.png b/content/relational-migration/introduction/img.png new file mode 100644 index 00000000..7142731c Binary files /dev/null and b/content/relational-migration/introduction/img.png differ diff --git a/content/relational-migration/introduction/index.en.md b/content/relational-migration/introduction/index.en.md new file mode 100644 index 00000000..4b7aeb4b --- /dev/null +++ b/content/relational-migration/introduction/index.en.md @@ -0,0 +1,40 @@ +--- +title : "Introduction" +weight : 10 +--- + +# Motivations for Migrating to DynamoDB + +![Rationales](/static/images/relational-migration/rationales.png) + +When designing a DynamoDB solution, you will need to make several decisions on +how best to leverage DynamoDB's unique feature set. These decisions +may involve trade-offs between competing benefits. +In a classic example, consider a table that has no secondary indexes, +and consider the same table, but with three different indexes to support +various search patterns. The first table scores high on the dimensions of +Low Cost and Simplicity, but may perform poorly if the table is large and +queries need to perform a full table scan to execute a search. + +Another trade-off comes with deciding how many tables will be required in +your DynamoDB schema. Without JOIN operators, you may need to make multiple calls to +read from separate tables when retrieving data. This table schema may match the +existing relational schema, greatly simplifying and streamlining the migration process, +but at the expense of more complex and potentially slower read operations. +With DynamoDB, you can choose to transform existing tables' data into +single-table or single-item format to bring related data close together, +adding some complexity to the write process, but also unlocking the ability +to do fast single-digit millisecond read operations. 
+ +As you learn the features of DynamoDB and plan your migration, keep the starting +motivations in mind so that you can make the best choices to satisfy your most +important requirements. + +## Additional considerations +To further maximize the benefits of DynamoDB, consider the questions below and +document your answers for future use. + +* What is the maximum write velocity and read velocity (per second) now and in the future? +* For any large, growing tables: How long will records live before they are safe to delete or archive? + + diff --git a/content/relational-migration/introduction/index2.en.md b/content/relational-migration/introduction/index2.en.md new file mode 100644 index 00000000..218368fa --- /dev/null +++ b/content/relational-migration/introduction/index2.en.md @@ -0,0 +1,15 @@ +--- +title : "Migration Phases" +weight : 11 +--- + +## Three phases +You can tackle a migration project by breaking it into three distinct phases. + +1. **Schema Refactoring**: Defining new tables and indexes to replace existing ones +2. **Data Migration**: Transforming and moving historical data into DynamoDB tables, with minimal downtime +3. **App Refactoring**: Replacing SQL calls with DynamoDB read and write calls + +The workshop will cover each phase in turn. + +![Migration Phases](/static/images/relational-migration/phases.png) diff --git a/content/relational-migration/introduction/index3.en.md b/content/relational-migration/introduction/index3.en.md new file mode 100644 index 00000000..e4f101b8 --- /dev/null +++ b/content/relational-migration/introduction/index3.en.md @@ -0,0 +1,31 @@ +--- +title : "Scope and Downtime" +weight : 12 +--- + +## Migration Scope + +A large relational database application may span a hundred or more tables and support several +different application functions. When approaching a large migration, consider breaking your +application into smaller components or microservices, and migrating a small set of tables at a time.
+This workshop involves migrating only a few tables to support a particular application function. + +--- + +## Offline Migration +If your application can tolerate some downtime during the migration, the migration process is much simpler. +You can keep the relational application in read-only mode to allow for partial availability during the migration window. + +* *In this workshop, we will focus on Offline Migrations.* +* *We will use the [DynamoDB Import from S3 feature](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataImport.HowItWorks.html) to populate new tables from staged data.* + +## Hybrid Migration +You might allow users to perform both reads and inserts, but not updates and deletes, during a migration. +The application could be modified to perform dual-writes to both the relational and DynamoDB databases, +while a separate job performs a backfill of all historical records into DynamoDB. + +## Online Migration +Applications that require zero downtime during migration are more difficult to migrate, +and can require significant planning and custom development. +One key decision is to estimate and weigh the costs of building a custom migration process +versus the cost to the business of having a downtime window during the cutover. diff --git a/content/relational-migration/introduction/index4.en.md b/content/relational-migration/introduction/index4.en.md new file mode 100644 index 00000000..47bfb013 --- /dev/null +++ b/content/relational-migration/introduction/index4.en.md @@ -0,0 +1,34 @@ +--- +title : "How Much Transformation is needed?" +weight : 13 +--- + +### Could you just migrate existing tables as they are, one-for-one? + +![One for One table migration](/static/images/relational-migration/oneforone.png) + +Consider the question in this section header. Is this even a good idea? This question generates a great deal of debate and interest in the data modeling community.
The DynamoDB **single table philosophy** is to store different types of records together in a single table. + +Here are some pros and cons of migrating relational tables directly into DynamoDB tables. + + +| Schema Transformation Option | Benefits / Drawbacks | |----------------------------------------|----------------------------------------| | Copy tables 1-for-1 directly into DynamoDB | Easier to migrate<br/>
Harder in DynamoDB to fetch related data from multiple tables | +| Transform Schema for NoSQL:
    Denormalized
    Single-Table
&nbsp;&nbsp;&nbsp;&nbsp;Item Collections | Easier to keep related data together for low-latency reads<br/>
Harder to update any duplicated, denormalized data
Harder to achieve zero-downtime migration | + + +## Combining tables using a SQL VIEW + +You could use a SQL VIEW to perform a custom transformation of data from multiple tables. +The view's leading one or two columns will need to provide a unique primary key that identifies each row. +The SQL language provides several powerful features you can use to combine, reshape, format, and calculate source data +into a custom dataset ready for import to DynamoDB; we will explore these features in this workshop. + +![Single Table Transformation with VIEW](/static/images/relational-migration/singletableview.png) + + + + diff --git a/content/relational-migration/introduction/index5.en.md b/content/relational-migration/introduction/index5.en.md new file mode 100644 index 00000000..f43b0899 --- /dev/null +++ b/content/relational-migration/introduction/index5.en.md @@ -0,0 +1,30 @@ +--- +title : "Transform, Extract, Convert, Stage, Import" +weight : 14 +--- + +## Staging Data for DynamoDB Import + + +![Extract](/static/images/relational-migration/extract.png) + +Once the data is fully staged in S3, we can then request a +[DynamoDB Import from S3](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataImport.HowItWorks.html), +which will create a new table and load the S3 data for us. +This import is fully managed by DynamoDB, saving us the trouble of creating and running a data load job, +and is priced to be much less expensive than consuming DynamoDB write capacity units (WCUs) directly in a load job. + + +The Import from S3 feature requires a table definition for the new table to be created.
+The table definition includes: +* Table Name +* Partition Key name and type +* Sort Key name and type (optional) +* Global Secondary Index (GSI) definitions (optional) + +[Global secondary indexes](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html) (GSIs) are optional, but the decision of whether and how to add these indexes to our table is an important one. +Done right, a GSI will unlock efficient new search capabilities on our DynamoDB table, but it will also increase the cost +of a write-heavy workload. We will learn how to automate or customize the index definitions created +during the Import process. + +![Import from S3](/static/images/relational-migration/import.png) diff --git a/content/relational-migration/scenario/index.en.md b/content/relational-migration/scenario/index.en.md new file mode 100644 index 00000000..0ef8c6d4 --- /dev/null +++ b/content/relational-migration/scenario/index.en.md @@ -0,0 +1,34 @@ +--- +title : "Business Scenario" +weight : 20 +--- +## Business Scenario - Vehicle Sales + +Imagine we are hired as consultants to support the sales order application +for a company that sells a range of vehicles, including cars, motorcycles, +and helicopters. +The vehicle customers are various taxi and transport companies, +who contact one of our sales representatives to place new orders. + + +## Database Schema +The company tracks five types of records in its MySQL database. +Each entity is stored in a separate table: + +* Customers +* Orders +* OrderLines +* Products +* Reps + +Foreign key constraints between these tables define **one-to-many** relationships. +For example: +* Each order is tied to one customer, while one customer might have + multiple orders. +* Each order line is tied to one order (the order header), while one order might have multiple order line items.
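The one-to-many relationships above can be sketched with a toy schema. This is illustrative only: stdlib SQLite stands in for the workshop's MySQL database, and the non-key column names are made up (the real tables come from the workshop's setup_tables.sh script):

```python
# Illustrative only: SQLite stands in for the workshop's MySQL database.
# cust_id and ord_id match the workshop's keys; other columns are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE Customers (cust_id TEXT PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE Orders (
    ord_id  TEXT PRIMARY KEY,
    cust_id TEXT NOT NULL REFERENCES Customers(cust_id))""")

conn.execute("INSERT INTO Customers VALUES ('cust-1', 'Acme Taxi Co')")
conn.execute("INSERT INTO Orders VALUES ('ord-1', 'cust-1')")
conn.execute("INSERT INTO Orders VALUES ('ord-2', 'cust-1')")

# One-to-many: the JOIN returns one row per order for the single customer
rows = conn.execute("""SELECT c.cust_id, o.ord_id
                       FROM Customers c JOIN Orders o ON c.cust_id = o.cust_id
                       ORDER BY o.ord_id""").fetchall()
```

The foreign key constraint is what ties each Orders row back to exactly one Customers row; the JOIN fans that single customer out across its two orders.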
+ +The Entity Relationship Diagram (ERD) for our database schema is as follows: + +![Relational Application Schema](/static/images/relational-migration/relational_schema.png) + + diff --git a/content/relational-migration/scenario/index2.en.md b/content/relational-migration/scenario/index2.en.md new file mode 100644 index 00000000..3f6d77d2 --- /dev/null +++ b/content/relational-migration/scenario/index2.en.md @@ -0,0 +1,33 @@ +--- +title : "Project Requirements" +weight : 21 +--- + +## Plan a migration to DynamoDB +The business has evaluated DynamoDB and likes the features and performance it offers. +Our mission is to manage a project to assess, plan, and build a migration of the existing +Sales Orders application from the relational database onto DynamoDB. + +### Explore approaches + +We have been given time to explore different potential approaches to the migration. + +A goal of the project is to learn how to migrate tables using DynamoDB's Single Table pattern, +while also planning a traditional 1-for-1 table migration as an alternative, in case it proves to be the better approach. + +### Leverage skills of IT staff + +We have been asked to write any data transformation logic in SQL, so that the existing IT database staff +will feel comfortable performing future migrations. + +The existing application code is modular, so that operations like +"list products", "get customer" and "delete order line" are defined and encapsulated +in data access functions within the Python code base. +This should make it relatively easy to find these read and write functions and then +swap out the SQL operations for DynamoDB API calls. If we do it right, the bulk of the application +code will not need to be changed. + +But we can't begin to make these changes until we have +all agreed on what the new DynamoDB table schema and data formats will look like.
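As a sketch of that swap, a read function can keep its signature while its body changes from a SQL SELECT to a DynamoDB GetItem. The function names here are assumptions, not the workshop's actual chalicelib code, and a tiny in-memory stand-in replaces the boto3 Table resource so the sketch runs offline:

```python
# Hypothetical data-access functions; names are illustrative, not the
# workshop's actual chalicelib code.

def get_customer_sql(cursor, cust_id):
    # Relational implementation: SELECT one row by primary key
    cursor.execute("SELECT * FROM Customers WHERE cust_id = %s", (cust_id,))
    return cursor.fetchone()

def get_customer_dynamodb(table, cust_id):
    # DynamoDB implementation: same signature, GetItem by partition key
    return table.get_item(Key={"cust_id": cust_id}).get("Item")

class FakeTable:
    """Minimal stand-in for a boto3 Table resource, so this runs offline."""
    def __init__(self, items):
        self.items = items

    def get_item(self, Key):
        item = self.items.get(Key["cust_id"])
        return {"Item": item} if item else {}

table = FakeTable({"cust-1": {"cust_id": "cust-1", "name": "Acme Taxi Co"}})
customer = get_customer_dynamodb(table, "cust-1")
```

Because callers see the same inputs and outputs either way, only the function bodies change during the migration.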
+ + diff --git a/content/relational-migration/schema refactoring/index.en.md b/content/relational-migration/schema refactoring/index.en.md new file mode 100644 index 00000000..94fa2b22 --- /dev/null +++ b/content/relational-migration/schema refactoring/index.en.md @@ -0,0 +1,19 @@ +--- +title : "Schema Refactoring" +weight : 22 +--- + +## Discovering Table columns, data types, and indexes + +As consultants, we need to do some discovery and scoping to define our starting parameters +for the migration. In particular, we want to find out all we can about existing tables including their: +* Columns and data types +* Primary Key column(s) +* Indexes +* Constraints + +In MySQL, a system schema called INFORMATION_SCHEMA holds the answers to these questions. We could query +this schema and learn what currently exists. We could also use the standard MySQL Workbench tool to do the same. +However, even better would be to get suggestions for how to convert the table metadata +to DynamoDB format. The Chalice API and Web App are designed to help us with this. + diff --git a/content/relational-migration/schema refactoring/index2.en.md b/content/relational-migration/schema refactoring/index2.en.md new file mode 100644 index 00000000..a8655ed9 --- /dev/null +++ b/content/relational-migration/schema refactoring/index2.en.md @@ -0,0 +1,43 @@ +--- +title : "Table Survey" +weight : 23 +--- + +## Review a table + +Returning to the Web App, click on the Tables button. +You should now see a list of the tables in the database. Click on the Customers table. + +![Customers Table](/static/images/relational-migration/customers.png) + +The table has columns with VARCHAR, INT, and DATETIME data types. The Primary Key column, cust_id, is indicated in blue. + +If we were to move this table's data into DynamoDB, we could convert the VARCHAR types into +DynamoDB String (S) format, and the INT types into DynamoDB Number (N) format.
+However, DynamoDB does not have a native date format. + +Instead, dates are usually written as Strings in ISO 8601 format like this: ```"2025-12-13T09:45:37"``` + +Dates can also be stored as Numbers. The DynamoDB TTL automatic expiration feature requires future +dates to be stored as epoch (Unix timestamp) Numbers like this: ```1731934325```. DynamoDB TTL (Time To Live) is a feature that automatically deletes items from a DynamoDB table after a specified time. For more information see [using time to live](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) in our developer documentation. + +## Convert Table to DynamoDB Table Definition + +The tool provides a routine to generate a DynamoDB table definition based on the +columns and keys from a given relational table. Click the GENERATE button below the table details. + +![Generate Customer Table](/static/images/relational-migration/customers_ddb.png) + +This JSON format can be used to create a table with various automation tools, +such as the [AWS CLI](https://docs.aws.amazon.com/cli/) +**[create-table](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/create-table.html)** command. + + +Notice that there are no details on the last_updated datetime column or any other columns, apart from cust_id. +DynamoDB tables are schema-less, meaning that the developer indicates attribute (column) names and data types +only when writing a new record. Each record could have different attributes, +since the database itself does not enforce any record-level schema.
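A minimal sketch of these conventions, using the cust_id and last_updated columns we know about (the credit_rating column and the expire_at TTL attribute name are assumptions for illustration):

```python
# Sketch: convert one relational row into DynamoDB's typed attribute-value
# format (the shape used by the low-level API and S3 import).
# credit_rating and expire_at are assumed, illustrative names.
from datetime import datetime, timezone

def to_dynamodb_item(row):
    item = {}
    for name, value in row.items():
        if isinstance(value, datetime):
            item[name] = {"S": value.isoformat()}  # no date type: ISO 8601 String
        elif isinstance(value, int):
            item[name] = {"N": str(value)}         # Numbers travel as strings
        else:
            item[name] = {"S": str(value)}
    return item

row = {
    "cust_id": "cust-1",
    "credit_rating": 700,
    "last_updated": datetime(2025, 12, 13, 9, 45, 37),
}
item = to_dynamodb_item(row)

# A TTL attribute instead stores a future time as an epoch Number:
expires = datetime(2026, 1, 1, tzinfo=timezone.utc)
ttl_attr = {"expire_at": {"N": str(int(expires.timestamp()))}}
```

Agreeing on rules like these up front is exactly the "data formats" decision the project plan called for.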
+ + + + diff --git a/content/relational-migration/schema refactoring/index3.en.md b/content/relational-migration/schema refactoring/index3.en.md new file mode 100644 index 00000000..1c3d7224 --- /dev/null +++ b/content/relational-migration/schema refactoring/index3.en.md @@ -0,0 +1,34 @@ +--- +title : "Table and Index Survey" +weight : 25 +--- + +## Consider what indexes exist on the table + +Within the Web App, notice the two Access Pattern tabs near the top of the page. +Click on the second tab, called **Querying**. + + +![Tab for Querying](/static/images/relational-migration/querying_tab.png) + + + +You will now see a form listing the Primary Key index, along with other secondary indexes. + +![Customers Table With Indexes Form](/static/images/relational-migration/customers_indexes_ddb_form.png) + + +## Convert Table with Indexes to DynamoDB Table Definition +Now, click on the Generate button below the form. You should see a new version of the DynamoDB +table definition, this time with one Global Secondary Index (GSI) for each relational table index. + +A GSI is a separate data structure that stores your table's data organized by a different primary key, +and can be used to perform efficient queries against a large table. It is similar to a relational database index. + +![Customers Table With Indexes](/static/images/relational-migration/customers_indexes_ddb.png) + +Notice that a few more attribute (column) names are defined in the AttributeDefinitions section. +This is because any attributes that are involved in a GSI +definition need to be declared in advance.
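The rule is easier to see in a trimmed-down example. The index and attribute names below are illustrative, not the tool's exact output, but the JSON shape matches the CreateTable API: every attribute used as a table key or a GSI key appears in AttributeDefinitions, while non-key columns like last_updated do not:

```python
# Illustrative table definition with one GSI (names are examples only).
# Being schema-less, DynamoDB declares key attributes and nothing else.
table_definition = {
    "TableName": "Customers",
    "KeySchema": [{"AttributeName": "cust_id", "KeyType": "HASH"}],
    "AttributeDefinitions": [
        {"AttributeName": "cust_id", "AttributeType": "S"},
        {"AttributeName": "region", "AttributeType": "S"},  # used only by the GSI
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "idx_region",
            "KeySchema": [{"AttributeName": "region", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
}

# Collect every key attribute the table or any GSI relies on
declared = {a["AttributeName"] for a in table_definition["AttributeDefinitions"]}
keys_in_use = {k["AttributeName"] for k in table_definition["KeySchema"]}
for gsi in table_definition["GlobalSecondaryIndexes"]:
    keys_in_use |= {k["AttributeName"] for k in gsi["KeySchema"]}
```

The two sets must match exactly; a GSI key missing from AttributeDefinitions would be rejected at table-creation time.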
+ + diff --git a/content/relational-migration/schema refactoring/index4.en.md b/content/relational-migration/schema refactoring/index4.en.md new file mode 100644 index 00000000..ed6fce2d --- /dev/null +++ b/content/relational-migration/schema refactoring/index4.en.md @@ -0,0 +1,53 @@ +--- +title : "Generate Table Definition in a script" +weight : 26 +--- + +### Automate the table definition
The web app is a nice way to review and learn the DynamoDB table definition format. +However, we also want the ability to run this from a command line, +so that it can be included in a script and automated. + +### mysql_desc_ddb.py +1. In the command prompt, make sure you are still in the project's root folder: ```aws-dynamodb-examples/workshops/relational-migration``` +2. Type ```ls``` to review the scripts available to you. + +3. Run: +```bash +python3 mysql_desc_ddb.py Customers +``` +The script should output a table definition in JSON format, like we saw within the web app. + +4. Next, let's pipe the output to a new file so we can more easily review it: +```bash +python3 mysql_desc_ddb.py Customers > Customers.json +``` + +5. Within the left nav, find ```Customers.json``` and double click to open it in the editor. + +::alert[When migrating to DynamoDB, it is important to consider how many GSIs you want. You may feel that your current relational table has too many indexes, or too few, to efficiently process both search requests and write traffic against the table. While you can create a table with the exact same indexes in DynamoDB, you can also adjust the indexes by adding, removing, or changing the index definitions in this file.] + +For now, we approve the proposed GSI definitions generated by the script, so no changes are needed. + +### Script Details + +6. 
In the left nav, double click on the Python script to open it in the editor: ```mysql_desc_ddb.py``` + +This script runs as a client against the local Chalice project code, so it doesn't require +any network access to the API endpoint we deployed earlier. + +At the bottom of the script is the convert_type() function. Here you can see the mappings +we have chosen between MySQL datatypes and the common string (S) and number (N) datatypes in DynamoDB. + +![Data Type Conversions](/static/images/relational-migration/type_conversion.png) + +### Checkpoint + +So far, we have learned that relational tables can be recreated as DynamoDB tables with similar +primary key and index definitions. +For some migrations, a one-for-one direct table migration is ideal. We will perform such a migration later in the Data Migration section of the workshop +when we move a table and existing data into a DynamoDB table. + +But migrations can also require transformation of multiple tables into a single DynamoDB table. +We will explore this path next. + diff --git a/content/relational-migration/schema refactoring/index5.en.md b/content/relational-migration/schema refactoring/index5.en.md new file mode 100644 index 00000000..cf8443a6 --- /dev/null +++ b/content/relational-migration/schema refactoring/index5.en.md @@ -0,0 +1,76 @@ +--- +title : "Single Table Philosophy" +weight : 27 +--- + +### Schema Consolidation for Single Table +Without any JOIN operator to combine data from multiple tables, developers often store different types +of records all in the same table in DynamoDB. This allows item collections to emerge. +Item collections are sets of records that have something in common, are stored together, +and can be retrieved quickly and efficiently with +a query operation. A thorough overview of the single table philosophy is given in this +[blog post](https://aws.amazon.com/blogs/database/single-table-vs-multi-table-design-in-amazon-dynamodb/).
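As a sketch of what an item collection looks like under a customer-and-orders key design (cust_id as partition key, ord_id as sort key; sample values only), a Query on one partition key returns the whole collection, which we emulate here in plain Python:

```python
# Illustrative single-table items: rows sharing the partition key cust_id
# form an item collection; values are made up for the sketch.
items = [
    {"cust_id": "cust-1", "ord_id": "ord-1", "order_date": "2025-01-15"},
    {"cust_id": "cust-1", "ord_id": "ord-2", "order_date": "2025-02-03"},
    {"cust_id": "cust-2", "ord_id": "ord-3", "order_date": "2025-02-10"},
]

def query(partition_key):
    # Stand-in for a DynamoDB Query: every item with this partition key,
    # returned in sort-key order
    return sorted((i for i in items if i["cust_id"] == partition_key),
                  key=lambda i: i["ord_id"])

collection = query("cust-1")
```

One Query call retrieves the customer's whole collection without any JOIN, which is the payoff of storing related records together.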
+ +Let's assume we wish to combine the Customers and Orders tables with a JOIN to produce a single data set. +The database schema includes constraints that hint at how tables should be combined with a JOIN. + +### Foreign Key constraints as hints + +1. From the web app left nav, click on the Orders table. +2. Click on the Querying tab near the top of the page. +3. Scroll to the bottom of the page and find the button labeled Foreign Key Relationships. + +![Foreign Key Relationship](/static/images/relational-migration/foreign_key.png) + +4. Click the "Paste to editor" button to put a sample query into the SQL box. + ::alert[The SQL editor window allows you to build and test SQL queries. Look for query results in a grid at the bottom of the page.] +5. Press the **Run SQL** button to see the results of the query. + +Notice that the leading columns are just the first columns in the Orders table. + +![Foreign Query Result](/static/images/relational-migration/fk_result.png) + +For this workshop, we prefer to have the first column of any dataset represent the new table's Partition Key, +and the second column the new table's Sort Key (if one is needed). + +A SQL VIEW has been created for you that performs the JOIN but returns cust_id and ord_id as the leading two columns. +This aligns nicely with a DynamoDB table's two-part primary key. The Partition Key representing a customer can have +one or more records, each with a unique Sort Key value representing an order. + +6. Just below the SQL editor panel, find the button called **vCustOrders** and click it. +7. Review the result data set and notice that cust_id and ord_id are in the leading positions now. + + ![VIEW Result](/static/images/relational-migration/view_result.png) + +If you wish to check the VIEW definition, open the file ```source-tables/create_views.sql```. + +Now, let's generate a DynamoDB table definition based on this view's output. + +8. 
Run: +```bash +python3 mysql_desc_ddb.py vCustOrders +``` + +The script returns a new table definition based on the name of the view, with the first +column becoming the Partition Key. The script will assume data types of string (S) by default. + +However, cust_id is not unique across the dataset. We want to get a table definition that uses the +first TWO column names as the Partition Key and Sort Key. + +9. Run: +```bash +python3 mysql_desc_ddb.py vCustOrders 2 +``` +Now we can see that the DynamoDB table's Key Schema includes both columns. + +#### Custom GSIs +Unlike the previous table definition we generated, this one will have no GSIs defined since there are no +indexes on a relational VIEW. If you need a GSI, you can manually add it to the table definition, or +you can add a new GSI at any time to an existing DynamoDB table. + +### Summary +We learned how to generate a full table definition from an existing table, +or a simplified table definition from the leading columns of a VIEW that combines records from multiple tables. + +Next up is the Data Migration section of the workshop. + diff --git a/content/relational-migration/setup/index.en.md b/content/relational-migration/setup/index.en.md new file mode 100644 index 00000000..84cb29b5 --- /dev/null +++ b/content/relational-migration/setup/index.en.md @@ -0,0 +1,21 @@ +--- +title : "Setup" +weight : 15 +--- + +### Starting Environment +Let's get the workshop environment set up so we can begin planning the migration. + +Here are the resources that have already been deployed for you in your AWS account. +![Starting Resources](/static/images/relational-migration/starting.png) + +This environment may look similar to what you already have in your organization! +You likely have a developer desktop or laptop, +the ability to find and clone the public GitHub code repository, +and a running MySQL database instance.
+You can create a new Amazon S3 bucket quickly, which can be used as a staging area for data to be migrated. + + + + + diff --git a/content/relational-migration/setup/index1.en.md b/content/relational-migration/setup/index1.en.md new file mode 100644 index 00000000..d715fbc7 --- /dev/null +++ b/content/relational-migration/setup/index1.en.md @@ -0,0 +1,73 @@ +--- +title : "Dev Environment" +weight : 16 +--- + +[AWS Cloud9](https://aws.amazon.com/cloud9/) is a cloud-based integrated development environment (IDE) that lets you write, run, and debug code with just a browser. AWS Cloud9 includes a code editor, debugger, and terminal. It also comes prepackaged with essential tools for popular programming languages and the AWS Command Line Interface (CLI) preinstalled so that you don’t have to install files or configure your laptop for this lab. Your AWS Cloud9 environment will have access to the same AWS resources as the user with which you signed in to the AWS Management Console. + +### To set up your AWS Cloud9 development environment: + +1. Choose **Services** at the top of the page, and then choose **Cloud9** under **Developer Tools**. + +2. You will find an environment ready to use under **Your environments**. + +3. Click on **Open IDE**; your IDE should open with a welcome note. + +You should now see your AWS Cloud9 environment. You need to be familiar with the three areas of the AWS Cloud9 console shown in the following screenshot: + +![Cloud9 Environment](/static/images/zetl-cloud9-environment.png) + +- **File explorer**: On the left side of the IDE, the file explorer shows a list of the files in your directory. + +- **File editor**: On the upper right area of the IDE, the file editor is where you view and edit files that you’ve selected in the file explorer. + +- **Terminal**: On the lower right area of the IDE, this is where you run commands to execute code samples. + + +From within the terminal: + +2. 
Run the command ```aws sts get-caller-identity``` to verify that your AWS credentials are properly configured. + +3. Clone the repository containing the Chalice code and migration scripts. Run: + +```bash +cd ~/environment +git clone https://github.com/aws-samples/aws-dynamodb-examples.git
cd aws-dynamodb-examples +git checkout :param{key="lsql_git_commit"} +``` + + +*This checkout command ensures you are using a specific, tested version of the repository.* + +Then change into the workshop folder: + +```bash +cd workshops/relational-migration +``` + +4. Next, run this to install two components: Chalice and the MySQL connector for Python. (Boto3, the AWS SDK for Python, is already available in the Cloud9 environment.) + +```bash +sudo pip3 install chalice mysql-connector-python +``` + +5. From the left navigation panel, locate our project folder by + clicking into ```aws-dynamodb-examples / workshops / relational-migration``` + +6. Find the gear icon near the top of the left nav panel, and click "Show Hidden Files". + You should now see a folder called ```.chalice``` under the main **relational-migration** folder. + Within this folder is the ```config.json``` file that holds the MySQL connection details. + A script will automatically update this file in the next step. + +7. Return to the terminal window. Run this script, which + uses AWS CLI commands to find the MySQL host's IP address and the S3 bucket name, sets them as + environment variables, and updates the Chalice config.json file: + +```bash +source ./setenv.sh +``` + +You should see output similar to this: +![setenv.sh settings](/static/images/relational-migration/setenv.png) + +Your developer desktop is now configured for testing and deployment!
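To illustrate what setenv.sh writes into the Chalice config, here is a sketch of parsing a config.json for per-stage environment variables. The file layout (stages, environment_variables) is standard Chalice; the variable names and values below are assumptions, not the script's actual keys:

```python
# Hypothetical .chalice/config.json content; real key names may differ.
import json

config_text = """
{
  "version": "2.0",
  "app_name": "relational-migration",
  "stages": {
    "relational": {
      "environment_variables": {
        "MYSQL_HOST": "10.0.0.25",
        "MIGRATION_BUCKET": "my-staging-bucket"
      }
    }
  }
}
"""

config = json.loads(config_text)
env = config["stages"]["relational"]["environment_variables"]
# A quick sanity check that the connection settings were injected
missing = [k for k in ("MYSQL_HOST", "MIGRATION_BUCKET") if not env.get(k)]
```

Whatever is under environment_variables for the deployed stage becomes environment variables of the Lambda function, which is how the API code finds the database.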
+ diff --git a/content/relational-migration/setup/index2.en.md b/content/relational-migration/setup/index2.en.md new file mode 100644 index 00000000..033c1b74 --- /dev/null +++ b/content/relational-migration/setup/index2.en.md @@ -0,0 +1,58 @@ +--- +title : "Chalice API" +weight : 17 +--- + +### Application Setup + +We will be deploying a new application stack using [AWS Chalice](https://github.com/aws/chalice), +a Python-based serverless framework. + +Chalice deploys the following components: + +* An **AWS Lambda function** to perform database read and write calls +* An **Amazon API Gateway service** with integration to the Lambda function +* Associated roles and permissions + +In this workshop, we will focus on the AWS Lambda source code. + +::alert[Chalice allows us to run unit tests against this code locally, before deploying to Lambda, and also allows for a mock deployment to localhost:8000 in case you wish to run the web service privately from your laptop.]{header="Note"} + +The Lambda source code project has been set up as follows: +* Entry point : **app.py** +* Read and write function implementations: + * **chalicelib/mysql_calls.py** + * **chalicelib/dynamodb_calls.py** + + +1. Next, let's deploy the Chalice application stack. +```bash +chalice deploy --stage relational +``` + +2. The command will create resources and print details of the new stack. Notice the Rest API URL value. + This is the public endpoint for the middle-tier service we will use to drive the + sample relational application. +3. Carefully copy the Rest API URL value. +4. Optional: Paste this into a new browser tab to test it. You should see a status message appear. + +--- + +## Single Page Web App +A single-page web application is included in the /webapp project folder. +The web app has already been deployed for you in a public S3 bucket for convenience. + +5. 
Navigate to [https://amazon-dynamodb-labs.com/static/relational-migration/web/index.html](https://amazon-dynamodb-labs.com/static/relational-migration/web/index.html) + +The web app stores the API URL you provide as a browser cookie. + JavaScript functions will then call the API for you when you click + the various buttons in the app. + +6. Click the "Target API" button and paste in the Rest API URL you copied in step 3. +7. Click the "Tables" button. A list of tables would normally appear below the button; however, none exist yet. + We will create sample tables in the next step. + + + + + diff --git a/content/relational-migration/setup/index3.en.md b/content/relational-migration/setup/index3.en.md new file mode 100644 index 00000000..9d13809b --- /dev/null +++ b/content/relational-migration/setup/index3.en.md @@ -0,0 +1,25 @@ +--- +title : "Relational Tables" +weight : 18 +--- + + +### Create Tables, Views, and Load Data + +For your convenience, a single script can set up and populate the relational database schema. + +1. From the Cloud9 terminal, run: + +```bash +./setup_tables.sh +``` + +This will create a set of tables and a SQL view, and fill the tables with sample records. +You can re-run this at any time to reset the starting relational environment. + +### Congratulations! +Your starting environment has been set up, and you now have the following components. + +![Relational Application Stack](/static/images/relational-migration/relational-stack.png) + + diff --git a/content/relational-migration/summary/index.en.md b/content/relational-migration/summary/index.en.md new file mode 100644 index 00000000..9af3b68e --- /dev/null +++ b/content/relational-migration/summary/index.en.md @@ -0,0 +1,14 @@ +--- +title : "Summary" +weight : 40 +--- + +Congratulations! You have completed the Relational Migration to DynamoDB workshop.
+You learned how to approach a migration project, what features, tools, and scripts are available to help you, +and gained hands-on experience with the components of an end-to-end custom migration job. + +If you are running this event in your own account, be sure to delete the CloudFormation stack +that launched the workshop to avoid unexpected charges. + +You used scripts and tools from the /workshops/relational-migration folder in the +[github.com/aws-samples/aws-dynamodb-examples](https://github.com/aws-samples/aws-dynamodb-examples/) repository. \ No newline at end of file diff --git a/contentspec.yaml b/contentspec.yaml index 19479524..cd6cdd40 100644 --- a/contentspec.yaml +++ b/contentspec.yaml @@ -13,3 +13,4 @@ params: event_driven_architecture_lab_yaml : "https://s3.amazonaws.com/amazon-dynamodb-labs.com/assets/event-driven-cfn.yaml" github_contributing_guide : "https://github.com/aws-samples/amazon-dynamodb-labs/blob/master/CONTRIBUTING.md" github_issues_link : "https://github.com/aws-samples/amazon-dynamodb-labs/issues" + lsql_git_commit : "47a43bedf75bc0859e9141ad1bdd1f330f0933f1" diff --git a/static/images/relational-migration/customers.png b/static/images/relational-migration/customers.png new file mode 100644 index 00000000..2a915da1 Binary files /dev/null and b/static/images/relational-migration/customers.png differ diff --git a/static/images/relational-migration/customers_ddb.png b/static/images/relational-migration/customers_ddb.png new file mode 100644 index 00000000..33319e38 Binary files /dev/null and b/static/images/relational-migration/customers_ddb.png differ diff --git a/static/images/relational-migration/customers_indexes_ddb.png b/static/images/relational-migration/customers_indexes_ddb.png new file mode 100644 index 00000000..0409c3fa Binary files /dev/null and b/static/images/relational-migration/customers_indexes_ddb.png differ diff --git a/static/images/relational-migration/customers_indexes_ddb_form.png
b/static/images/relational-migration/customers_indexes_ddb_form.png new file mode 100644 index 00000000..b647911d Binary files /dev/null and b/static/images/relational-migration/customers_indexes_ddb_form.png differ diff --git a/static/images/relational-migration/ddb_gsi.png b/static/images/relational-migration/ddb_gsi.png new file mode 100644 index 00000000..9b502ff7 Binary files /dev/null and b/static/images/relational-migration/ddb_gsi.png differ diff --git a/static/images/relational-migration/extract.png b/static/images/relational-migration/extract.png new file mode 100644 index 00000000..52f1f38d Binary files /dev/null and b/static/images/relational-migration/extract.png differ diff --git a/static/images/relational-migration/fk_result.png b/static/images/relational-migration/fk_result.png new file mode 100644 index 00000000..58efcaa5 Binary files /dev/null and b/static/images/relational-migration/fk_result.png differ diff --git a/static/images/relational-migration/foreign_key.png b/static/images/relational-migration/foreign_key.png new file mode 100644 index 00000000..b695c013 Binary files /dev/null and b/static/images/relational-migration/foreign_key.png differ diff --git a/static/images/relational-migration/frontpage.png b/static/images/relational-migration/frontpage.png new file mode 100644 index 00000000..b36d14f0 Binary files /dev/null and b/static/images/relational-migration/frontpage.png differ diff --git a/static/images/relational-migration/import-from-s3.png b/static/images/relational-migration/import-from-s3.png new file mode 100644 index 00000000..c6079aec Binary files /dev/null and b/static/images/relational-migration/import-from-s3.png differ diff --git a/static/images/relational-migration/import.png b/static/images/relational-migration/import.png new file mode 100644 index 00000000..520e66d3 Binary files /dev/null and b/static/images/relational-migration/import.png differ diff --git a/static/images/relational-migration/migrate_flow.png 
b/static/images/relational-migration/migrate_flow.png new file mode 100644 index 00000000..6f4e69ce Binary files /dev/null and b/static/images/relational-migration/migrate_flow.png differ diff --git a/static/images/relational-migration/migrate_output.png b/static/images/relational-migration/migrate_output.png new file mode 100644 index 00000000..2a6627fd Binary files /dev/null and b/static/images/relational-migration/migrate_output.png differ diff --git a/static/images/relational-migration/mysql_s3_output.png b/static/images/relational-migration/mysql_s3_output.png new file mode 100644 index 00000000..3abf6fee Binary files /dev/null and b/static/images/relational-migration/mysql_s3_output.png differ diff --git a/static/images/relational-migration/mysql_s3_write_output.png b/static/images/relational-migration/mysql_s3_write_output.png new file mode 100644 index 00000000..cfe4752d Binary files /dev/null and b/static/images/relational-migration/mysql_s3_write_output.png differ diff --git a/static/images/relational-migration/oneforone.png b/static/images/relational-migration/oneforone.png new file mode 100644 index 00000000..97587abb Binary files /dev/null and b/static/images/relational-migration/oneforone.png differ diff --git a/static/images/relational-migration/orderlines.png b/static/images/relational-migration/orderlines.png new file mode 100644 index 00000000..f23ccfd7 Binary files /dev/null and b/static/images/relational-migration/orderlines.png differ diff --git a/static/images/relational-migration/phases.png b/static/images/relational-migration/phases.png new file mode 100644 index 00000000..22dcd66d Binary files /dev/null and b/static/images/relational-migration/phases.png differ diff --git a/static/images/relational-migration/querying_tab.png b/static/images/relational-migration/querying_tab.png new file mode 100644 index 00000000..d2a658d7 Binary files /dev/null and b/static/images/relational-migration/querying_tab.png differ diff --git 
a/static/images/relational-migration/range_expression.png b/static/images/relational-migration/range_expression.png new file mode 100644 index 00000000..57f51fb3 Binary files /dev/null and b/static/images/relational-migration/range_expression.png differ diff --git a/static/images/relational-migration/rationales.png b/static/images/relational-migration/rationales.png new file mode 100644 index 00000000..08eba3d0 Binary files /dev/null and b/static/images/relational-migration/rationales.png differ diff --git a/static/images/relational-migration/relational-stack.png b/static/images/relational-migration/relational-stack.png new file mode 100644 index 00000000..fd387150 Binary files /dev/null and b/static/images/relational-migration/relational-stack.png differ diff --git a/static/images/relational-migration/relational_schema.png b/static/images/relational-migration/relational_schema.png new file mode 100644 index 00000000..c045043f Binary files /dev/null and b/static/images/relational-migration/relational_schema.png differ diff --git a/static/images/relational-migration/setenv.png b/static/images/relational-migration/setenv.png new file mode 100644 index 00000000..641bf659 Binary files /dev/null and b/static/images/relational-migration/setenv.png differ diff --git a/static/images/relational-migration/singletableview.png b/static/images/relational-migration/singletableview.png new file mode 100644 index 00000000..57e47b26 Binary files /dev/null and b/static/images/relational-migration/singletableview.png differ diff --git a/static/images/relational-migration/sparse.png b/static/images/relational-migration/sparse.png new file mode 100644 index 00000000..83fc572f Binary files /dev/null and b/static/images/relational-migration/sparse.png differ diff --git a/static/images/relational-migration/stacked.png b/static/images/relational-migration/stacked.png new file mode 100644 index 00000000..af5c310e Binary files /dev/null and b/static/images/relational-migration/stacked.png 
differ diff --git a/static/images/relational-migration/starting.png b/static/images/relational-migration/starting.png new file mode 100644 index 00000000..bab0f053 Binary files /dev/null and b/static/images/relational-migration/starting.png differ diff --git a/static/images/relational-migration/type_conversion.png b/static/images/relational-migration/type_conversion.png new file mode 100644 index 00000000..efc45151 Binary files /dev/null and b/static/images/relational-migration/type_conversion.png differ diff --git a/static/images/relational-migration/vehicles.png b/static/images/relational-migration/vehicles.png new file mode 100644 index 00000000..30fd3ef1 Binary files /dev/null and b/static/images/relational-migration/vehicles.png differ diff --git a/static/images/relational-migration/view_result.png b/static/images/relational-migration/view_result.png new file mode 100644 index 00000000..03925881 Binary files /dev/null and b/static/images/relational-migration/view_result.png differ