Fix the figure labels for EN version
krahets committed May 6, 2024
1 parent f111249 commit d6167ce
Showing 18 changed files with 35 additions and 35 deletions.
4 changes: 2 additions & 2 deletions en/docs/chapter_array_and_linkedlist/array.md
Original file line number Diff line number Diff line change
@@ -125,7 +125,7 @@ Elements in an array are stored in contiguous memory spaces, making it simpler t

![Memory address calculation for array elements](array.assets/array_memory_location_calculation.png)

As observed in the above illustration, array indexing conventionally begins at $0$. While this might appear counterintuitive, considering counting usually starts at $1$, within the address calculation formula, **an index is essentially an offset from the memory address**. For the first element's address, this offset is $0$, validating its index as $0$.
As observed in the figure above, array indexing conventionally begins at $0$. While this might appear counterintuitive, considering counting usually starts at $1$, within the address calculation formula, **an index is essentially an offset from the memory address**. For the first element's address, this offset is $0$, validating its index as $0$.

Accessing elements in an array is highly efficient, allowing us to randomly access any element in $O(1)$ time.
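The constant-time access described above can be sketched in Python (a minimal sketch, not the book's own listing, which lives in the per-language code files referenced by the `[file]{…}` placeholders):

```python
import random

def random_access(nums: list[int]) -> int:
    """Return the element at a randomly chosen index.

    Each access is O(1): the runtime resolves nums[index] directly
    from base address + element size * index.
    """
    index = random.randint(0, len(nums) - 1)
    return nums[index]
```
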

@@ -135,7 +135,7 @@ Accessing elements in an array is highly efficient, allowing us to randomly acce

### Inserting elements

Array elements are tightly packed in memory, with no space available to accommodate additional data between them. Illustrated in Figure below, inserting an element in the middle of an array requires shifting all subsequent elements back by one position to create room for the new element.
Array elements are tightly packed in memory, with no space available to accommodate additional data between them. As illustrated in the figure below, inserting an element in the middle of an array requires shifting all subsequent elements back by one position to create room for the new element.

![Array element insertion example](array.assets/array_insert_element.png)
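The shifting step can be sketched as follows (a hedged sketch modeling a fixed-length array, so the last element is overwritten; not the book's own listing):

```python
def insert(nums: list[int], num: int, index: int):
    """Insert num at index by shifting all subsequent elements back one position."""
    # Move elements [index, n-2] one slot to the right, starting from the end
    for i in range(len(nums) - 1, index, -1):
        nums[i] = nums[i - 1]
    nums[index] = num  # place the new element in the freed slot
```

Because every element after `index` must move, the operation is $O(n)$ in the worst case.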

8 changes: 4 additions & 4 deletions en/docs/chapter_array_and_linkedlist/linked_list.md
@@ -8,7 +8,7 @@ The design of linked lists allows for their nodes to be distributed across memor

![Linked list definition and storage method](linked_list.assets/linkedlist_definition.png)

As shown in the figure, we see that the basic building block of a linked list is the <u>node</u> object. Each node comprises two key components: the node's "value" and a "reference" to the next node.
As shown in the figure above, we see that the basic building block of a linked list is the <u>node</u> object. Each node comprises two key components: the node's "value" and a "reference" to the next node.

- The first node in a linked list is the "head node", and the final one is the "tail node".
- The tail node points to "null", designated as `null` in Java, `nullptr` in C++, and `None` in Python.
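A node with these two components can be sketched in Python (a minimal sketch; the book's published listing may differ):

```python
class ListNode:
    """A linked-list node: a value plus a reference to the next node."""
    def __init__(self, val: int):
        self.val = val
        self.next = None  # "null" reference until the node is linked

# Build the list 1 -> 3 -> 2: n0 is the head node, n2 is the tail node
n0, n1, n2 = ListNode(1), ListNode(3), ListNode(2)
n0.next = n1
n1.next = n2
```
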
@@ -406,7 +406,7 @@ The array as a whole is a variable, for instance, the array `nums` includes elem

### Inserting nodes

Inserting a node into a linked list is very easy. As shown in the figure, let's assume we aim to insert a new node `P` between two adjacent nodes `n0` and `n1`. **This can be achieved by simply modifying two node references (pointers)**, with a time complexity of $O(1)$.
Inserting a node into a linked list is very easy. As shown in the figure below, let's assume we aim to insert a new node `P` between two adjacent nodes `n0` and `n1`. **This can be achieved by simply modifying two node references (pointers)**, with a time complexity of $O(1)$.

By comparison, inserting an element into an array has a time complexity of $O(n)$, which becomes less efficient when dealing with large data volumes.
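The two-reference rewiring can be sketched as follows (a hedged sketch, not the book's own listing; the `ListNode` class here is a minimal stand-in):

```python
class ListNode:
    def __init__(self, val: int):
        self.val = val
        self.next = None

def insert(n0: ListNode, P: ListNode):
    """Insert node P right after n0 by modifying two references: O(1)."""
    n1 = n0.next
    P.next = n1   # P now points at the old successor
    n0.next = P   # n0 now points at P
```
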

@@ -418,7 +418,7 @@ By comparison, inserting an element into an array has a time complexity of $O(n)

### Deleting nodes

As shown in the figure, deleting a node from a linked list is also very easy, **involving only the modification of a single node's reference (pointer)**.
As shown in the figure below, deleting a node from a linked list is also very easy, **involving only the modification of a single node's reference (pointer)**.

It's important to note that even though node `P` continues to point to `n1` after being deleted, it becomes inaccessible during linked list traversal. This effectively means that `P` is no longer a part of the linked list.
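Deletion can be sketched the same way (a minimal sketch with a stand-in `ListNode`, not the book's own listing):

```python
class ListNode:
    def __init__(self, val: int):
        self.val = val
        self.next = None

def remove(n0: ListNode):
    """Remove the node after n0 by modifying a single reference: O(1)."""
    if n0.next is None:
        return
    P = n0.next
    n0.next = P.next  # P may still point to n1, but it is no longer reachable
```
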

@@ -461,7 +461,7 @@ The table below summarizes the characteristics of arrays and linked lists, and i

## Common types of linked lists

As shown in the figure, there are three common types of linked lists.
As shown in the figure below, there are three common types of linked lists.

- **Singly linked list**: This is the standard linked list described earlier. Nodes in a singly linked list include a value and a reference to the next node. The first node is known as the head node, and the last node, which points to null (`None`), is the tail node.
- **Circular linked list**: This is formed when the tail node of a singly linked list points back to the head node, creating a loop. In a circular linked list, any node can function as the head node.
2 changes: 1 addition & 1 deletion en/docs/chapter_array_and_linkedlist/summary.md
@@ -44,7 +44,7 @@ If an element is searched first and then deleted, the time complexity is indeed

**Q**: In the figure "Linked List Definition and Storage Method", do the light blue storage nodes occupy a single memory address, or do they share half with the node value?

The diagram is just a qualitative representation; quantitative analysis depends on specific situations.
The figure is just a qualitative representation; quantitative analysis depends on specific situations.

- Different types of node values occupy different amounts of space, such as int, long, double, and object instances.
- The memory space occupied by pointer variables depends on the operating system and compilation environment used, usually 8 bytes or 4 bytes.
6 changes: 3 additions & 3 deletions en/docs/chapter_backtracking/backtracking_algorithm.md
@@ -8,7 +8,7 @@ Backtracking typically employs "depth-first search" to traverse the solution spa

Given a binary tree, search for and record all nodes with a value of $7$, then return them as a list.

For this problem, we traverse this tree in preorder and check if the current node's value is $7$. If it is, we add the node's value to the result list `res`. The relevant process is shown in the following diagram and code:
For this problem, we traverse this tree in preorder and check if the current node's value is $7$. If it is, we add the node's value to the result list `res`. The relevant process is shown in the figure below:

```src
[file]{preorder_traversal_i_compact}-[class]{}-[func]{pre_order}
```
@@ -85,7 +85,7 @@ To meet the above constraints, **we need to add a pruning operation**: during th
```src
[file]{preorder_traversal_iii_compact}-[class]{}-[func]{pre_order}
```

"Pruning" is a very vivid noun. As shown in the diagram below, in the search process, **we "cut off" the search branches that do not meet the constraints**, avoiding many meaningless attempts, thus enhancing the search efficiency.
"Pruning" is a very vivid noun. As shown in the figure below, in the search process, **we "cut off" the search branches that do not meet the constraints**, avoiding many meaningless attempts, thus enhancing the search efficiency.

![Pruning based on constraints](backtracking_algorithm.assets/preorder_find_constrained_paths.png)

@@ -421,7 +421,7 @@ Next, we solve Example Three based on the framework code. The `state` is the nod
```src
[file]{preorder_traversal_iii_template}-[class]{}-[func]{backtrack}
```

As per the requirements, after finding a node with a value of $7$, the search should continue, **thus the `return` statement after recording the solution should be removed**. The following diagram compares the search processes with and without retaining the `return` statement.
As per the requirements, after finding a node with a value of $7$, the search should continue, **thus the `return` statement after recording the solution should be removed**. The figure below compares the search processes with and without retaining the `return` statement.

![Comparison of retaining and removing the return in the search process](backtracking_algorithm.assets/backtrack_remove_return_or_not.png)

@@ -16,7 +16,7 @@ The following function uses a `for` loop to perform a summation of $1 + 2 + \dot
```src
[file]{iteration}-[class]{}-[func]{for_loop}
```

The flowchart below represents this sum function.
The figure below represents this sum function.

![Flowchart of the sum function](iteration_and_recursion.assets/iteration.png)
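A minimal Python sketch of the summation loop (the book's published listing is referenced by the `[file]{iteration}` placeholder above; this version is for illustration):

```python
def for_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n with a for loop; the iteration count grows with n."""
    res = 0
    for i in range(1, n + 1):  # range is half-open, so go up to n inclusive
        res += i
    return res
```
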

@@ -50,7 +50,7 @@ We can nest one loop structure within another. Below is an example using `for` l
```src
[file]{iteration}-[class]{}-[func]{nested_for_loop}
```

The flowchart below represents this nested loop.
The figure below represents this nested loop.

![Flowchart of the nested loop](iteration_and_recursion.assets/nested_iteration.png)
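A sketch of the nested loop (illustrative only; the output format here is an assumption, not taken from the book's listing):

```python
def nested_for_loop(n: int) -> str:
    """Collect every ordered pair (i, j); the inner body runs n * n times, i.e. O(n^2)."""
    res = ""
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            res += f"({i}, {j}), "
    return res
```
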

@@ -147,7 +147,7 @@ Using the recursive relation, and considering the first two numbers as terminati
```src
[file]{recursion}-[class]{}-[func]{fib}
```

Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated below, this continuous recursive calling eventually creates a <u>recursion tree</u> with a depth of $n$.
Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated in the figure below, this continuous recursive calling eventually creates a <u>recursion tree</u> with a depth of $n$.

![Fibonacci sequence recursion tree](iteration_and_recursion.assets/recursion_tree.png)
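The branching recursion can be sketched as follows (a minimal sketch assuming the termination values $f(1) = 0$, $f(2) = 1$ described above):

```python
def fib(n: int) -> int:
    """nth Fibonacci number; each call spawns two branching calls, forming a recursion tree."""
    if n == 1 or n == 2:  # termination conditions: f(1) = 0, f(2) = 1
        return n - 1
    return fib(n - 1) + fib(n - 2)
```
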

@@ -725,7 +725,7 @@ The time complexity of both `loop()` and `recur()` functions is $O(n)$, but thei

## Common types

Let the size of the input data be $n$, the following chart displays common types of space complexities (arranged from low to high).
Let the size of the input data be $n$; the figure below displays common types of space complexities (arranged from low to high).

$$
\begin{aligned}
4 changes: 2 additions & 2 deletions en/docs/chapter_computational_complexity/time_complexity.md
@@ -669,7 +669,7 @@ In essence, time complexity analysis is about finding the asymptotic upper bound

If there exist positive real numbers $c$ and $n_0$ such that for all $n > n_0$, $T(n) \leq c \cdot f(n)$, then $f(n)$ is considered an asymptotic upper bound of $T(n)$, denoted as $T(n) = O(f(n))$.

As illustrated below, calculating the asymptotic upper bound involves finding a function $f(n)$ such that, as $n$ approaches infinity, $T(n)$ and $f(n)$ have the same growth order, differing only by a constant factor $c$.
As shown in the figure below, calculating the asymptotic upper bound involves finding a function $f(n)$ such that, as $n$ approaches infinity, $T(n)$ and $f(n)$ have the same growth order, differing only by a constant factor $c$.

![Asymptotic upper bound of a function](time_complexity.assets/asymptotic_upper_bound.png)
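As a worked example (the numbers here are chosen for illustration, not taken from the text): for $T(n) = 3n + 2$ we can take $f(n) = n$, $c = 4$, and $n_0 = 2$, since

$$
T(n) = 3n + 2 \leq 4n = c \cdot f(n) \quad \text{for all } n > 2,
$$

and therefore $T(n) = O(n)$.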

@@ -951,7 +951,7 @@ The following table illustrates examples of different operation counts and their

## Common types of time complexity

Let's consider the input data size as $n$. The common types of time complexities are illustrated below, arranged from lowest to highest:
Let's consider the input data size as $n$. The common types of time complexities are shown in the figure below, arranged from lowest to highest:

$$
\begin{aligned}
4 changes: 2 additions & 2 deletions en/docs/chapter_data_structure/number_encoding.md
@@ -14,7 +14,7 @@ Firstly, it's important to note that **numbers are stored in computers using the
- **One's complement**: The one's complement of a positive number is the same as its sign-magnitude. For negative numbers, it's obtained by inverting all bits except the sign bit.
- **Two's complement**: The two's complement of a positive number is the same as its sign-magnitude. For negative numbers, it's obtained by adding $1$ to their one's complement.

The following diagram illustrates the conversions among sign-magnitude, one's complement, and two's complement:
The figure below illustrates the conversions among sign-magnitude, one's complement, and two's complement:

![Conversions between sign-magnitude, one's complement, and two's complement](number_encoding.assets/1s_2s_complement.png)
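The two's-complement bit pattern can be inspected with a short sketch (this masking idiom is a Python illustration, not part of the original text):

```python
def twos_complement(x: int, bits: int = 8) -> str:
    """Bit pattern of x in two's complement with the given width.

    Masking with (1 << bits) - 1 yields the same bits as inverting the
    sign-magnitude and adding 1 for negative numbers.
    """
    return format(x & ((1 << bits) - 1), f"0{bits}b")
```
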

@@ -125,7 +125,7 @@ $$

![Example calculation of a float in IEEE 754 standard](number_encoding.assets/ieee_754_float.png)

Observing the diagram, given an example data $\mathrm{S} = 0$, $\mathrm{E} = 124$, $\mathrm{N} = 2^{-2} + 2^{-3} = 0.375$, we have:
Observing the figure above, given an example data $\mathrm{S} = 0$, $\mathrm{E} = 124$, $\mathrm{N} = 2^{-2} + 2^{-3} = 0.375$, we have:

$$
\text{val} = (-1)^0 \times 2^{124 - 127} \times (1 + 0.375) = 0.171875
2 changes: 1 addition & 1 deletion en/docs/chapter_divide_and_conquer/binary_search_recur.md
@@ -34,7 +34,7 @@ Starting from the original problem $f(0, n-1)$, perform the binary search throug
2. Recursively solve the subproblem reduced by half in size, which could be $f(i, m-1)$ or $f(m+1, j)$.
3. Repeat steps `1.` and `2.` until `target` is found or the interval becomes empty, in which case return.

The diagram below shows the divide-and-conquer process of binary search for element $6$ in an array.
The figure below shows the divide-and-conquer process of binary search for element $6$ in an array.

![The divide-and-conquer process of binary search](binary_search_recur.assets/binary_search_recur.png)
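The recursive subproblem $f(i, j)$ can be sketched as follows (a minimal sketch of the divide-and-conquer formulation; the function name `dfs` is illustrative):

```python
def dfs(nums: list[int], target: int, i: int, j: int) -> int:
    """Solve subproblem f(i, j): search target in the sorted slice nums[i..j]."""
    if i > j:              # empty interval: target is absent
        return -1
    m = (i + j) // 2       # midpoint splits the problem in half
    if nums[m] < target:
        return dfs(nums, target, m + 1, j)
    elif nums[m] > target:
        return dfs(nums, target, i, m - 1)
    return m               # found target at index m
```
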

14 changes: 7 additions & 7 deletions en/docs/chapter_divide_and_conquer/build_binary_tree_problem.md
@@ -2,7 +2,7 @@

!!! question

Given the preorder traversal `preorder` and inorder traversal `inorder` of a binary tree, construct the binary tree and return the root node of the binary tree. Assume that there are no duplicate values in the nodes of the binary tree (as shown in the diagram below).
Given the preorder traversal `preorder` and inorder traversal `inorder` of a binary tree, construct the binary tree and return the root node of the binary tree. Assume that there are no duplicate values in the nodes of the binary tree (as shown in the figure below).

![Example data for building a binary tree](build_binary_tree_problem.assets/build_tree_example.png)

@@ -20,10 +20,10 @@ Based on the above analysis, this problem can be solved using divide and conquer

By definition, `preorder` and `inorder` can be divided into three parts.

- Preorder traversal: `[ Root | Left Subtree | Right Subtree ]`, for example, the tree in the diagram corresponds to `[ 3 | 9 | 2 1 7 ]`.
- Inorder traversal: `[ Left Subtree | Root | Right Subtree ]`, for example, the tree in the diagram corresponds to `[ 9 | 3 | 1 2 7 ]`.
- Preorder traversal: `[ Root | Left Subtree | Right Subtree ]`, for example, the tree in the figure corresponds to `[ 3 | 9 | 2 1 7 ]`.
- Inorder traversal: `[ Left Subtree | Root | Right Subtree ]`, for example, the tree in the figure corresponds to `[ 9 | 3 | 1 2 7 ]`.

Using the data in the diagram above, we can obtain the division results as shown in the steps below.
Using the data in the figure above, we can obtain the division results as shown in the figure below.

1. The first element 3 in the preorder traversal is the value of the root node.
2. Find the index of the root node 3 in `inorder`, and use this index to divide `inorder` into `[ 9 | 3 | 1 2 7 ]`.
@@ -49,7 +49,7 @@ As shown in the table below, the above variables can represent the index of the
| Left subtree | $i + 1$ | $[l, m-1]$ |
| Right subtree | $i + 1 + (m - l)$ | $[m+1, r]$ |

Please note that $(m-l)$ in the right subtree root index means "the number of nodes in the left subtree"; it is best understood in conjunction with the figure below.
Please note, the meaning of $(m-l)$ in the right subtree root index is "the number of nodes in the left subtree", which is suggested to be understood in conjunction with the figure below.

![Indexes of the root node and left and right subtrees](build_binary_tree_problem.assets/build_tree_division_pointers.png)
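The index arithmetic in the table can be sketched in Python (a hedged sketch using the example data above; `inorder_map` is the hash table `hmap` the text introduces for $O(1)$ lookup of $m$):

```python
class TreeNode:
    def __init__(self, val: int):
        self.val = val
        self.left = None
        self.right = None

def dfs(preorder, inorder_map, i, l, r):
    """Build the subtree whose inorder interval is [l, r], rooted at preorder[i]."""
    if r - l < 0:                       # empty interval: no subtree
        return None
    root = TreeNode(preorder[i])
    m = inorder_map[root.val]           # index of the root in the inorder traversal
    root.left = dfs(preorder, inorder_map, i + 1, l, m - 1)
    root.right = dfs(preorder, inorder_map, i + 1 + (m - l), m + 1, r)
    return root

def build_tree(preorder, inorder):
    inorder_map = {val: i for i, val in enumerate(inorder)}
    return dfs(preorder, inorder_map, 0, 0, len(inorder) - 1)
```
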

@@ -61,7 +61,7 @@ To improve the efficiency of querying $m$, we use a hash table `hmap` to store t
```src
[file]{build_tree}-[class]{}-[func]{build_tree}
```

The diagram below shows the recursive process of building the binary tree, where each node is established during the "descending" process, and each edge (reference) is established during the "ascending" process.
The figure below shows the recursive process of building the binary tree, where each node is established during the "descending" process, and each edge (reference) is established during the "ascending" process.

=== "<1>"
![Recursive process of building a binary tree](build_binary_tree_problem.assets/built_tree_step1.png)
@@ -90,7 +90,7 @@ The diagram below shows the recursive process of building the binary tree, where
=== "<9>"
![built_tree_step9](build_binary_tree_problem.assets/built_tree_step9.png)

Each recursive function's division results of `preorder` and `inorder` are shown in the diagram below.
Each recursive function's division results of `preorder` and `inorder` are shown in the figure below.

![Division results in each recursive function](build_binary_tree_problem.assets/built_tree_overall.png)

2 changes: 1 addition & 1 deletion en/docs/chapter_dynamic_programming/knapsack_problem.md
@@ -88,7 +88,7 @@ Dynamic programming essentially involves filling the $dp$ table during the state
```src
[file]{knapsack}-[class]{}-[func]{knapsack_dp}
```

As shown in the figures below, both the time complexity and space complexity are determined by the size of the array `dp`, i.e., $O(n \times cap)$.
As shown in the figure below, both the time complexity and space complexity are determined by the size of the array `dp`, i.e., $O(n \times cap)$.
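The table-filling process can be sketched in Python (a minimal sketch of the state transition, not the book's own `knapsack_dp` listing):

```python
def knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:
    """0-1 knapsack: dp[i][c] = best value using the first i items within capacity c."""
    n = len(wgt)
    dp = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(1, cap + 1):
            if wgt[i - 1] > c:
                dp[i][c] = dp[i - 1][c]  # item i doesn't fit: skip it
            else:
                # max of skipping item i vs. taking it
                dp[i][c] = max(dp[i - 1][c],
                               dp[i - 1][c - wgt[i - 1]] + val[i - 1])
    return dp[n][cap]
```
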

=== "<1>"
![The dynamic programming process of the 0-1 knapsack problem](knapsack_problem.assets/knapsack_dp_step1.png)
2 changes: 1 addition & 1 deletion en/docs/chapter_graph/graph_operations.md
@@ -57,7 +57,7 @@ Given an undirected graph with a total of $n$ vertices and $m$ edges, the variou
=== "Remove a vertex"
![adjacency_list_remove_vertex](graph_operations.assets/adjacency_list_step5_remove_vertex.png)

Below is the adjacency list code implementation. Compared to the above diagram, the actual code has the following differences.
Below is the adjacency list code implementation. Compared to the figure above, the actual code has the following differences.

- For convenience in adding and removing vertices, and to simplify the code, we use lists (dynamic arrays) instead of linked lists.
- Use a hash table to store the adjacency list, `key` being the vertex instance, `value` being the list (linked list) of adjacent vertices of that vertex.
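A hedged sketch of this representation (using hashable vertex keys directly rather than `Vertex` instances, and lists in place of linked lists, as the text suggests):

```python
from collections import defaultdict

class GraphAdjList:
    """Undirected graph stored as an adjacency list: vertex -> list of neighbors."""
    def __init__(self, edges):
        self.adj_list = defaultdict(list)
        for u, v in edges:
            self.add_edge(u, v)

    def add_edge(self, u, v):
        # An undirected edge is recorded in both vertices' lists
        self.adj_list[u].append(v)
        self.adj_list[v].append(u)

    def remove_edge(self, u, v):
        self.adj_list[u].remove(v)
        self.adj_list[v].remove(u)
```
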
2 changes: 1 addition & 1 deletion en/docs/chapter_greedy/max_product_cutting_problem.md
@@ -2,7 +2,7 @@

!!! question

Given a positive integer $n$, split it into at least two positive integers that sum up to $n$, and find the maximum product of these integers, as illustrated below.
Given a positive integer $n$, split it into at least two positive integers that sum up to $n$, and find the maximum product of these integers, as illustrated in the figure below.

![Definition of the maximum product cutting problem](max_product_cutting_problem.assets/max_product_cutting_definition.png)
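The greedy solution to this problem can be sketched as follows (a minimal sketch assuming the standard "cut as many 3s as possible" rule; when the remainder is $1$, trade one $3 + 1$ for $2 + 2$):

```python
def max_product_cutting(n: int) -> int:
    """Maximum product of cutting n into at least two positive integers."""
    if n <= 3:
        return 1 * (n - 1)   # must cut out at least a 1, leaving n - 1
    a, b = n // 3, n % 3     # a threes with remainder b
    if b == 1:
        return 3 ** (a - 1) * 4  # 3 + 1 is worse than 2 + 2
    if b == 2:
        return 3 ** a * 2
    return 3 ** a            # remainder 0: all threes
```
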

2 changes: 1 addition & 1 deletion en/docs/chapter_searching/replace_linear_by_hashing.md
@@ -22,7 +22,7 @@ This method has a time complexity of $O(n^2)$ and a space complexity of $O(1)$,

## Hash search: trading space for time

Consider using a hash table, with key-value pairs being the array elements and their indices, respectively. Loop through the array, performing the steps shown in the figures below each round.
Consider using a hash table, with key-value pairs being the array elements and their indices, respectively. Loop through the array, performing the steps shown in the figure below each round.

1. Check if the number `target - nums[i]` is in the hash table. If so, directly return the indices of these two elements.
2. Add the key-value pair `nums[i]` and index `i` to the hash table.
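The two steps above can be sketched as follows (a minimal sketch of the hash-search approach; the function name is illustrative):

```python
def two_sum_hash_table(nums: list[int], target: int) -> list[int]:
    """Find indices of two numbers summing to target: O(n) time, O(n) space."""
    dic = {}  # element value -> index
    for i, num in enumerate(nums):
        if target - num in dic:          # step 1: check for the complement
            return [dic[target - num], i]
        dic[num] = i                     # step 2: record this element
    return []
```
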