Merge pull request #2866 from szarnyasg/bash-code-blocks
Simplify Bash code blocks
szarnyasg authored May 14, 2024
2 parents 1dab9ae + 5efdf15 commit dddd458
Showing 15 changed files with 80 additions and 48 deletions.
2 changes: 1 addition & 1 deletion docs/api/adbc.md
@@ -136,7 +136,7 @@ StatementExecuteQuery(&adbc_statement, nullptr, nullptr, &adbc_error);

The first step is to use `pip` to install the ADBC Driver Manager. You will also need to install `pyarrow` to directly access Apache Arrow-formatted result sets (such as using `fetch_arrow_table`).

-```shell
+```bash
pip install adbc_driver_manager pyarrow
```
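
A quick way to confirm the installation (a sketch; it only checks that both packages import cleanly):

```bash
python3 -c "import adbc_driver_manager, pyarrow; print(pyarrow.__version__)"
```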

14 changes: 7 additions & 7 deletions docs/api/cli/overview.md
@@ -26,7 +26,7 @@ If in a PowerShell or POSIX shell environment, use the command `./duckdb` instead
The typical usage of the `duckdb` command is the following:

```bash
-$ duckdb [OPTIONS] [FILENAME]
+duckdb [OPTIONS] [FILENAME]
```
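
For example, to open or create a persistent database file (the file name here is a placeholder):

```bash
duckdb my_database.db
```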

### Options
@@ -45,7 +45,7 @@ When no `[FILENAME]` argument is provided, the DuckDB CLI will open a temporary in-memory database
You will see DuckDB's version number, information on the connection, and a prompt starting with `D`.

```bash
-$ duckdb
+duckdb
```

```text
@@ -202,7 +202,7 @@ Note that the duck head is built with Unicode characters and does not work in all
To invoke that file on initialization, use this command:

```bash
-$ duckdb -init prompt.sql
+duckdb -init prompt.sql
```

This outputs:
@@ -221,13 +221,13 @@ Use ".open FILENAME" to reopen on a persistent database.
To read/process a file and exit immediately, pipe the file contents into `duckdb`:

```bash
-$ duckdb < select_example.sql
+duckdb < select_example.sql
```

To execute a command with SQL text passed in directly from the command line, call `duckdb` with two arguments: the database location (or `:memory:`), and a string with the SQL statement to execute.

```bash
-$ duckdb :memory: "SELECT 42 AS the_answer"
+duckdb :memory: "SELECT 42 AS the_answer"
```

## Loading Extensions
@@ -255,7 +255,7 @@ COPY (SELECT 42 AS woot UNION ALL SELECT 43 AS woot) TO 'test.csv' (HEADER);
First, read a file and pipe it to the `duckdb` CLI executable. As arguments to the DuckDB CLI, pass in the location of the database to open (in this case, an in-memory database) and a SQL command that uses `/dev/stdin` as a file location.

```bash
-$ cat test.csv | duckdb :memory: "SELECT * FROM read_csv('/dev/stdin')"
+cat test.csv | duckdb :memory: "SELECT * FROM read_csv('/dev/stdin')"
```

| woot |
@@ -266,7 +266,7 @@ $ cat test.csv | duckdb :memory: "SELECT * FROM read_csv('/dev/stdin')"
To write back to stdout, the copy command can be used with the `/dev/stdout` file location.

```bash
-$ cat test.csv | duckdb :memory: "COPY (SELECT * FROM read_csv('/dev/stdin')) TO '/dev/stdout' WITH (FORMAT 'csv', HEADER)"
+cat test.csv | duckdb :memory: "COPY (SELECT * FROM read_csv('/dev/stdin')) TO '/dev/stdout' WITH (FORMAT 'csv', HEADER)"
```

```csv
3 changes: 1 addition & 2 deletions docs/api/odbc/linux.md
@@ -41,8 +41,7 @@ sudo yum install unixODBC
To extract them, run:

```bash
-mkdir duckdb_odbc
-unzip duckdb_odbc-linux-amd64.zip -d duckdb_odbc
+mkdir duckdb_odbc && unzip duckdb_odbc-linux-amd64.zip -d duckdb_odbc
```

3. The `unixodbc_setup.sh` script performs the configuration of the DuckDB ODBC driver. It relies on the unixODBC package, which provides commands such as `odbcinst` and `isql` for handling the ODBC setup and testing.
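
For instance, a user-level setup might look as follows (a sketch; the `-u` flag is an assumption about the script's interface):

```bash
./unixodbc_setup.sh -u
```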
3 changes: 1 addition & 2 deletions docs/api/odbc/macos.md
@@ -20,8 +20,7 @@ title: ODBC API on macOS
3. The archive contains the `libduckdb_odbc.dylib` artifact. To extract it to a directory, run:

```bash
-mkdir duckdb_odbc
-unzip duckdb_odbc-osx-universal.zip -d duckdb_odbc
+mkdir duckdb_odbc && unzip duckdb_odbc-osx-universal.zip -d duckdb_odbc
```

4. There are two ways to configure the ODBC driver: either by initializing via the configuration files or by connecting with [`SQLDriverConnect`](https://learn.microsoft.com/en-us/sql/odbc/reference/syntax/sqldriverconnect-function?view=sql-server-ver16).
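
As a sketch of the configuration-file route (the DSN name, driver label, and database path are placeholders; the exact keys are assumptions):

```bash
# Append a hypothetical DSN entry to the user-level ODBC configuration
cat >> ~/.odbc.ini << 'EOF'
[DuckDB]
Driver = DuckDB Driver
Database = /path/to/mydata.db
EOF
```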
3 changes: 1 addition & 2 deletions docs/api/odbc/windows.md
@@ -20,8 +20,7 @@ Using the DuckDB ODBC API on Windows requires the following steps:
Decompress the archive to a directory (e.g., `duckdb_odbc`). For example, run:

```bash
-mkdir duckdb_odbc
-unzip duckdb_odbc-windows-amd64.zip -d duckdb_odbc
+mkdir duckdb_odbc && unzip duckdb_odbc-windows-amd64.zip -d duckdb_odbc
```

4. The `odbc_install.exe` binary performs the configuration of the DuckDB ODBC Driver on Windows. It depends on `Odbccp32.dll`, which provides functions to configure the ODBC registry entries.
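
A sketch of running it from the extracted directory (any command-line flags the installer accepts are not shown here and would be assumptions):

```bash
cd duckdb_odbc
./odbc_install.exe
```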
3 changes: 2 additions & 1 deletion docs/data/csv/overview.md
@@ -29,8 +29,9 @@ SELECT * FROM read_csv('flights.csv',
});
```

Read a CSV from stdin, auto-infer options:

```bash
-# read a CSV from stdin, auto-infer options
cat flights.csv | duckdb -c "SELECT * FROM read_csv('/dev/stdin')"
```

3 changes: 2 additions & 1 deletion docs/data/json/overview.md
@@ -25,8 +25,9 @@ FROM read_json('todos.json',
completed: 'BOOLEAN'});
```

Read a JSON file from stdin, auto-infer options:

```bash
-# read a JSON file from stdin, auto-infer options
cat data/json/todos.json | duckdb -c "SELECT * FROM read_json_auto('/dev/stdin')"
```

3 changes: 1 addition & 2 deletions docs/dev/building/build_instructions.md
@@ -21,8 +21,7 @@ sudo yum install -y git g++ cmake ninja-build openssl-devel
Ubuntu and Debian:

```bash
-sudo apt-get update
-sudo apt-get install -y git g++ cmake ninja-build libssl-dev
+sudo apt-get update && sudo apt-get install -y git g++ cmake ninja-build libssl-dev
```

Alpine Linux:
12 changes: 10 additions & 2 deletions docs/dev/building/building_extensions.md
@@ -13,9 +13,17 @@ For example, to install the [`httpfs` extension](../../extensions/httpfs), run the following

```bash
GEN=ninja BUILD_HTTPFS=1 make
-# for release builds
```

For release builds:

```bash
build/release/duckdb -c "INSTALL 'build/release/extension/httpfs/httpfs.duckdb_extension';"
-# for debug builds
```

For debug builds:

```bash
build/debug/duckdb -c "INSTALL 'build/debug/extension/httpfs/httpfs.duckdb_extension';"
```
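
As a quick sanity check (a sketch), the freshly built shell can try to load the extension; `LOAD` fails if the extension is missing or was built for a different platform:

```bash
build/release/duckdb -c "LOAD httpfs; SELECT 42 AS ok;"
```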

5 changes: 5 additions & 0 deletions docs/dev/building/troubleshooting.md
@@ -75,5 +75,10 @@ CMake Error at /usr/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake

```bash
sudo apt-get install -y libssl-dev
```

Then, build with:

```bash
GEN=ninja BUILD_HTTPFS=1 make
```
27 changes: 16 additions & 11 deletions docs/dev/sqllogictest/debugging.md
@@ -21,48 +21,53 @@ You can also skip certain queries from executing by placing `mode skip` in the file
## Triggering Which Tests to Run

When running the unittest program, by default all the fast tests are run. A specific test can be run by adding the name of the test as an argument. For the sqllogictests, this is the relative path to the test file.
To run only a single test:

```bash
-# run only a single test
build/debug/test/unittest test/sql/projection/test_simple_projection.test
```

All tests in a given directory can be executed by providing the directory as a parameter with square brackets.
To run all tests in the "projection" directory:

```bash
-# run all tests in the "projection" directory
build/debug/test/unittest "[projection]"
```


All tests, including the slow tests, can be run by passing an asterisk (`*`) as the test argument.
To run all tests, including the slow tests:

```bash
-# run all tests, including the slow tests
build/debug/test/unittest "*"
```

-We can run a subset of the tests using the `--start-offset` and `--end-offset` parameters:
+We can run a subset of the tests using the `--start-offset` and `--end-offset` parameters.
To run tests 200..250:

```bash
-# run tests the tests 200..250
build/debug/test/unittest --start-offset=200 --end-offset=250
```

-These are also available in percentages:
+These are also available in percentages. To run tests 10% - 20%:

```bash
-# run tests 10% - 20%
build/debug/test/unittest --start-offset-percentage=10 --end-offset-percentage=20
```

The set of tests to run can also be loaded from a file containing one test name per line and passed in using the `-f` option.

```bash
-$ cat test.list
+cat test.list
```

```text
test/sql/join/full_outer/test_full_outer_join_issue_4252.test
test/sql/join/full_outer/full_outer_join_cache.test
test/sql/join/full_outer/test_full_outer_join.test
-# run only the tests labeled in the file
-$ build/debug/test/unittest -f test.list
```

To run only the tests labeled in the file:

```bash
build/debug/test/unittest -f test.list
```
16 changes: 11 additions & 5 deletions docs/guides/data_viewers/tableau.md
@@ -8,8 +8,10 @@ In addition to a large number of built in connectors,
it also provides generic database connectivity via ODBC and JDBC connectors.

Tableau has two main versions: Desktop and Online (Server).

* For Desktop, connecting to a DuckDB database is similar to working in an embedded environment like Python.
* For Online, since DuckDB is in-process, the data needs to be either on the server itself or in a remote data bucket that is accessible from the server.

## Database Creation
@@ -85,19 +87,23 @@ On Linux, copy the Taco file to `/opt/tableau/connectors`.
On Windows, copy the Taco file to `C:\Program Files\Tableau\Connectors`.
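For example, on Linux (a sketch; the Taco file name is a placeholder for whatever the release archive contains):

```bash
sudo cp duckdb_jdbc.taco /opt/tableau/connectors/
```
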
Then issue these commands to disable signature validation:

-```sh
-$ tsm configuration set -k native_api.disable_verify_connector_plugin_signature -v true
-$ tsm pending-changes apply
+```bash
+tsm configuration set -k native_api.disable_verify_connector_plugin_signature -v true
```

```bash
tsm pending-changes apply
```

The last command will restart the server with the new settings.

### macOS

Copy the Taco file to the `/Users/[User]/Documents/My Tableau Repository/Connectors` folder.
Then launch Tableau Desktop from the Terminal with the command line argument to disable signature validation:

-```sh
-$ /Applications/Tableau\ Desktop\ ⟨year⟩.⟨quarter⟩.app/Contents/MacOS/Tableau -DDisableVerifyConnectorPluginSignature=true
+```bash
+/Applications/Tableau\ Desktop\ ⟨year⟩.⟨quarter⟩.app/Contents/MacOS/Tableau -DDisableVerifyConnectorPluginSignature=true
```

You can also package this up with AppleScript by using the following script:
4 changes: 1 addition & 3 deletions docs/guides/data_viewers/youplot.md
@@ -68,9 +68,7 @@ Maybe you're piping some data through `jq`. Maybe you're downloading a JSON file
Let's combine this with a quick `curl` from GitHub to see what a certain user has been up to lately.
```bash
-curl -sL "https://api.github.com/users/dacort/events?per_page=100" \
-  | duckdb -s "COPY (SELECT type, count(*) AS event_count FROM read_json_auto('/dev/stdin') GROUP BY 1 ORDER BY 2 DESC LIMIT 10) TO '/dev/stdout' WITH (FORMAT 'csv', HEADER)" \
-  | uplot bar -d, -H -t "GitHub Events for @dacort"
+curl -sL "https://api.github.com/users/dacort/events?per_page=100" | duckdb -s "COPY (SELECT type, count(*) AS event_count FROM read_json_auto('/dev/stdin') GROUP BY 1 ORDER BY 2 DESC LIMIT 10) TO '/dev/stdout' WITH (FORMAT 'csv', HEADER)" | uplot bar -d, -H -t "GitHub Events for @dacort"
```
![github-events](/images/guides/youplot/github-events.png)
20 changes: 15 additions & 5 deletions docs/guides/odbc/general.md
@@ -107,14 +107,24 @@ The first step is to include the SQL header files:
#include <sqlext.h>
```

These files contain the definitions of the ODBC functions, as well as the data types used by ODBC. To use these header files, you need the `unixodbc` package installed:

On macOS:

```bash
brew install unixodbc
-# or
-sudo apt-get install unixodbc-dev
-# or
-sudo yum install unixODBC-devel
```

On Ubuntu and Debian:

```bash
sudo apt-get install -y unixodbc-dev
```

On Fedora, CentOS, and Red Hat:

```bash
sudo yum install -y unixODBC-devel
```

Remember to include the header file location in your `CFLAGS`.
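
For example (a sketch; the include path depends on how and where unixODBC was installed):

```bash
# Homebrew on Apple Silicon typically places headers under /opt/homebrew/include (an assumption)
export CFLAGS="-I/opt/homebrew/include"
gcc $CFLAGS -c my_odbc_app.c   # my_odbc_app.c is a placeholder source file
```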
10 changes: 6 additions & 4 deletions docs/internals/storage.md
@@ -27,11 +27,10 @@ To move your database(s) to newer format you only need the older and the newer DuckDB
Open your database file with the older DuckDB and run the SQL statement `EXPORT DATABASE 'tmp'`. This allows you to save the whole state of the current database in use inside folder `tmp`.
The content of the `tmp` folder will be overwritten, so choose an empty or not-yet-existing location. Then, start the newer DuckDB and execute `IMPORT DATABASE 'tmp'` (pointing to the previously populated folder) to load the database, which can then be saved to the file you pointed DuckDB to.

-A bash two-liner (to be adapted with the file names and executable locations) is:
+A bash one-liner (to be adapted with the file names and executable locations) is:

```bash
-$ /older/version/duckdb mydata.db -c "EXPORT DATABASE 'tmp'"
-$ /newer/duckdb mydata.new.db -c "IMPORT DATABASE 'tmp'"
+/older/version/duckdb mydata.db -c "EXPORT DATABASE 'tmp'" && /newer/duckdb mydata.new.db -c "IMPORT DATABASE 'tmp'"
```

After this, `mydata.db` will be untouched in the old format, `mydata.new.db` will contain the same data but in a format accessible to more recent DuckDB versions, and the folder `tmp` will hold the same data in a universal format as different files.
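
As a quick sanity check (a sketch), the migrated file can be opened with the newer executable; `duckdb_tables()` lists the tables in the attached database:

```bash
/newer/duckdb mydata.new.db -c "SELECT table_name FROM duckdb_tables();"
```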
@@ -43,7 +42,10 @@ Check [`EXPORT` documentation](../sql/statements/export) for more details on the
DuckDB files start with a `uint64_t` which contains a checksum for the main header, followed by four magic bytes (`DUCK`), followed by the storage version number in a `uint64_t`.

```bash
-$ hexdump -n 20 -C mydata.db
+hexdump -n 20 -C mydata.db
```

```text
00000000 01 d0 e2 63 9c 13 39 3e 44 55 43 4b 2b 00 00 00 |...c..9>DUCK+...|
00000010 00 00 00 00 |....|
00000014
```
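
To extract just the storage version number, one option is `od` (a sketch, assuming a little-endian platform; the version is the `uint64_t` at byte offset 12):

```bash
# skip the 8-byte checksum and 4 magic bytes, then read one unsigned 64-bit integer
od -An -j 12 -N 8 -t u8 mydata.db
```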
