Data engineering is the software engineering that enables data scientists to work effectively.
A collection of interesting pieces from my projects (2019–2023): Spark, Scala, Python, shell.
Two main categories:
Plus one derivative, 'Spark/Scala stuff', after the migration to Spark 3:
For a full description, see the docs inside.
- To build the uber-jar, use the sbt assembly command (a typical local sequence is sketched after this list).
- To run unit tests, use the sbt test command.
- To run integration tests ... TBD.
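For reference, a typical local build-and-test sequence might look like the following sketch; it assumes the default sbt project layout and the Scala 2.11 output directory used in the copy step further below:
cd ./etl-ml-pieces.scala
sbt test                      # run unit tests
sbt assembly                  # build the uber-jar
ls target/scala-2.11/*.jar    # the assembled jar lands here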
Manual directory cleanup (dry run, only prints what would be removed):
find . -depth -type d \( -name target -or -name .bloop -or -name .bsp -or -name .metals -or -name metastore_db -or -name spark-warehouse \) -exec echo rm -rfv {} \;
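Once the preview looks right, the same command without the echo performs the actual deletion:
find . -depth -type d \( -name target -or -name .bloop -or -name .bsp -or -name .metals -or -name metastore_db -or -name spark-warehouse \) -exec rm -rfv {} \;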
Bash script that builds the fat-jar (uber-jar) for Spark.
Example: bash -xve ./cicd/build_uber_jar.sh ./etl-ml-pieces.scala/ /tmp/workdir/PACKAGES
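For orientation only, a minimal sketch of what such a wrapper could look like; this is a hypothetical reconstruction, assuming the script takes the sbt project directory and an output directory as its two arguments (the real cicd/build_uber_jar.sh is the authoritative version):
#!/usr/bin/env bash
# Hypothetical sketch, not the actual cicd/build_uber_jar.sh
set -eu
project_dir="$1"       # e.g. ./etl-ml-pieces.scala/
packages_dir="$2"      # e.g. /tmp/workdir/PACKAGES
mkdir -p "$packages_dir"
(cd "$project_dir" && sbt assembly)                           # build the uber-jar
cp -v "$project_dir"/target/scala-*/*.jar "$packages_dir"/   # collect the artifact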
Docker image that contains the tools for the uber-jar builder.
Build docker image: docker buildx build -f cicd/jarbuilder.Dockerfile --tag docker/jarbuilder:0.1.0 .
Build the uber-jar using the docker container:
docker run \
-it --rm \
--mount type=bind,src=./etl-ml-pieces.scala,dst=/sbtproject \
--workdir /sbtproject \
docker/jarbuilder:0.1.0 \
sbt assembly && \
cp -v etl-ml-pieces.scala/target/scala-2.11/*.jar /tmp/
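Note that the && and the final cp are executed by the host shell, not inside the container: because the project directory is bind-mounted, the jar built under etl-ml-pieces.scala/target/ is visible on the host and can be copied from there after docker run finishes.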
Run an interactive sbt shell in the docker container:
docker run \
-it --rm \
--mount type=bind,src=./etl-ml-pieces.scala,dst=/sbtproject \
--workdir /sbtproject \
docker/jarbuilder:0.1.0 \
sbt -v --mem 4096
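From the interactive shell you can then run tasks such as compile, test, or assembly repeatedly without paying the JVM and sbt startup cost on every invocation.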