
Fix dkms installation of deb packages created with Alien. #15415

Merged: 1 commit merged into openzfs:master on Nov 7, 2023

Conversation

@AllKind (Contributor) commented Oct 18, 2023

Alien does not honour the %posttrans hook.
So move the dkms uninstall/install scripts to the %pre/%post hooks in case of package install/upgrade.
In case of package removal, handle that in %preun.
Add removal of all old dkms modules.
Add checking for a broken 'dkms status'; handle that as well as possible and warn the user about it.
Also add more verbose messages about what we are doing.

Motivation and Context

This is my first pull request ever. It's been a while since I used git, and I have never used it in a collaborative setting before.
So please bear with me; I hope not to make too many mistakes.

I don't have any experience with building rpms and transforming them into deb packages.
But I worked my way through https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/ and https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html to get an idea of the things involved here.

Edit: Fixes: #15328, #15336 and #15253.

Description

In PR #13182 the rpm scriptlet hook %posttrans was introduced.
But as far as I can tell, Alien does not honour it, or there is no equivalent script hook for Debian packages.
So, to make sure the uninstall/install of the dkms modules happens on install/upgrade, I moved them to the %pre/%post hooks.
In case of package removal, that is handled in %preun.
I also added a check before running each command, to avoid a non-zero exit status for the scripts.
Additionally, I added verbose messages about what we are doing.

This change seems to ensure that Alien creates a Debian maintainer script for each rpm hook.
I also think it will always uninstall the dkms modules before the new package is installed, and always install the dkms modules after the new package has been installed.
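For orientation, here is a minimal sketch of the scriptlet layout described above (simplified and partly hypothetical; the real scriptlets in rpm/generic/zfs-dkms.spec.in carry more checks and messages, and <other-version> is only a placeholder):

%pre
# install/upgrade: make sure no previously registered module version lingers in dkms
dkms remove -m %{module} -v <other-version> --all ||:

%post
# register the freshly installed sources and build/install for the running kernel
dkms add -m %{module} -v %{version} ||:
dkms install --force -m %{module} -v %{version} ||:

%preun
# real removal only (not upgrade): unregister this version from dkms
dkms remove -m %{module} -v %{version} --all ||: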

How Has This Been Tested?

On my PC with Linux Mint 21.2, which is based on Ubuntu 22.04, I installed the zfs-dkms .deb file.
The dkms version used is 2.8.7.
In case of a fresh install, or an upgrade / re-install, everything went as expected.
Also the remove/purge actions yielded the expected result.

Edit: More tests were done. Descriptions in the replies below.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

@AllKind (Contributor Author) commented Oct 19, 2023

I have now set up a testing environment with VirtualBox: one VM with Fedora 38 and another with Ubuntu 22.04.
Fedora has dkms version 3.0.12.

On Fedora I discovered an error in my code regarding the %preun scriptlet logic, which didn't show up on Ubuntu or Mint.
That should be fixed now.

On Fedora I tested the RPM installation with zfs git master (2.2.99).
On Ubuntu I tested the Alien generated DEB files also with 2.2.99.
On Linux Mint the tests were made with a patched version of zfs 2.1.13.

Testing procedure was:
1: A fresh install of zfs and zfs-dkms.
2: A re-install of zfs-dkms.
3: A removal of zfs-dkms.
4: Again an install of zfs-dkms.

All tests go well in regard to the changes I made with this pull request.

Two things I did notice on Fedora:
Uninstalling zfs-dkms also removes zfs.

sudo dnf remove --noautoremove zfs-dkms-2.2.99-158_g01bc522a9.fc38.noarch
Dependencies resolved.
====================================================================================================================================================
 Package                       Architecture                Version                                         Repository                          Size
====================================================================================================================================================
Removing:
 zfs-dkms                      noarch                      2.2.99-158_g01bc522a9.fc38                      @@commandline                       57 M
Removing dependent packages:
 zfs                           x86_64                      2.2.99-158_g01bc522a9.fc38                      @@commandline                      1.8 M

Transaction Summary
====================================================================================================================================================
Remove  2 Packages

Installing only zfs-dkms afterwards doesn't install zfs, though.
I haven't looked into the dependency definition yet; that's maybe a task for another day. Today (actually tonight) I just want to rest. Done enough for one day.
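As a side note, the declared dependencies can be inspected straight from the package files; something like this would show whether zfs requires zfs-dkms and vice versa (the file names below are only illustrative):

rpm -qp --requires zfs-2.2.99-158_g01bc522a9.fc38.x86_64.rpm
rpm -qp --requires zfs-dkms-2.2.99-158_g01bc522a9.fc38.noarch.rpm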

The other thing is that dkms seems to have problems cleaning the build area (or at least thinks it has), because everything seems just fine after this:

Building module:
Cleaning build area...(bad exit status: 2)
make -j8 KERNELRELEASE=6.2.9-300.fc38.x86_64...........
Signing module /var/lib/dkms/zfs/2.2.99/build/module/zfs.ko
Signing module /var/lib/dkms/zfs/2.2.99/build/module/spl.ko

Running the post_build script:
Cleaning build area...(bad exit status: 2)

Maybe it's a result of me testing dozens of things, maybe not. But I don't think it's related to my changes.

So to sum up:
From my current point of view this pull request works with rpm and deb packages, for zfs version 2.1.13 as well as for 2.2.0 (or master).

@Binarus commented Oct 19, 2023

That patch badly breaks things on my Debian bookworm amd64 (vanilla, up to date at the time of writing). After having applied the patch, I can't produce the .deb packages any more:

root@debian /install/zfs-2.2.0 # make deb
make  dist-gzip am__post_remove_distdir='@:'
make[1]: Entering directory '/install/zfs-2.2.0'
make  distdir-am
make[2]: Entering directory '/install/zfs-2.2.0'
if test -d "zfs-2.2.0"; then find "zfs-2.2.0" -type d ! -perm -200 -exec chmod u+w {} ';' && rm -rf "zfs-2.2.0" || { sleep 5 && rm -rf "zfs-2.2.0"; }; else :; fi
test -d "zfs-2.2.0" || mkdir "zfs-2.2.0"
 (cd include && make  top_distdir=../zfs-2.2.0 distdir=../zfs-2.2.0/include \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[3]: Entering directory '/install/zfs-2.2.0/include'
make  distdir-am
make[4]: Entering directory '/install/zfs-2.2.0/include'
make[4]: Leaving directory '/install/zfs-2.2.0/include'
make[3]: Leaving directory '/install/zfs-2.2.0/include'
 (cd tests/zfs-tests/tests && make  top_distdir=../../../zfs-2.2.0 distdir=../../../zfs-2.2.0/tests/zfs-tests/tests \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[3]: Entering directory '/install/zfs-2.2.0/tests/zfs-tests/tests'
make  distdir-am
make[4]: Entering directory '/install/zfs-2.2.0/tests/zfs-tests/tests'
make[4]: Leaving directory '/install/zfs-2.2.0/tests/zfs-tests/tests'
make[3]: Leaving directory '/install/zfs-2.2.0/tests/zfs-tests/tests'
 (cd module && make  top_distdir=../zfs-2.2.0 distdir=../zfs-2.2.0/module \
     am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
make[3]: Entering directory '/install/zfs-2.2.0/module'
cd . && find . -name '*.[chS]' -exec sh -c 'for f; do mkdir -p $distdir/${f%/*}; cp ./$f $distdir/$f; done' _ {} +
cp ./Makefile.bsd $distdir/Makefile.bsd
make[3]: Leaving directory '/install/zfs-2.2.0/module'
make  \
  top_distdir="zfs-2.2.0" distdir="zfs-2.2.0" \
  dist-hook
make[3]: Entering directory '/install/zfs-2.2.0'
./scripts/make_gitrev.sh -D zfs-2.2.0 include/zfs_gitrev.h
/usr/bin/sed --in-place 's/\(Release:[[:space:]]*\).*/\11/' zfs-2.2.0/META
make[3]: Leaving directory '/install/zfs-2.2.0'
test -n "" \
|| find "zfs-2.2.0" -type d ! -perm -755 \
        -exec chmod u+rwx,go+rx {} \; -o \
  ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
  ! -type d ! -perm -400 -exec chmod a+r {} \; -o \
  ! -type d ! -perm -444 -exec /bin/bash /install/zfs-2.2.0/config/install-sh -c -m a+r {} {} \; \
|| chmod -R a+r "zfs-2.2.0"
make[2]: Leaving directory '/install/zfs-2.2.0'
tardir=zfs-2.2.0 && tar --format=posix -chf - "$tardir" | eval GZIP= gzip --best -c >zfs-2.2.0.tar.gz
make[1]: Leaving directory '/install/zfs-2.2.0'
if test -d "zfs-2.2.0"; then find "zfs-2.2.0" -type d ! -perm -200 -exec chmod u+w {} ';' && rm -rf "zfs-2.2.0" || { sleep 5 && rm -rf "zfs-2.2.0"; }; else :; fi
make  pkg="zfs-kmod" \
        def='--define "build_src_rpm 1" ' srpm-common
make[1]: Entering directory '/install/zfs-2.2.0'
make[2]: Entering directory '/install/zfs-2.2.0'
make[2]: Leaving directory '/install/zfs-2.2.0'
warning: Could not canonicalize hostname: debian
Wrote: /tmp/zfs-build-root-cBSrCDFk/SRPMS/zfs-dkms-2.2.0-1.src.rpm

RPM build warnings:
    Could not canonicalize hostname: debian
cp: cannot stat '/tmp/zfs-build-root-cBSrCDFk/SRPMS/zfs-kmod-2.2.0-1*src.rpm': No such file or directory
make[1]: *** [Makefile:14201: srpm-common] Error 1
make[1]: Leaving directory '/install/zfs-2.2.0'
make: *** [Makefile:14141: srpm-kmod] Error 2

I don't know what this is all about, and right now, I don't have the time to dig in. We'd be very happy if you could come to the rescue and fix your patch, because 2.1.13 won't compile either, so currently we can't use recent ZFS versions on Debian.

Thank you very much, and best regards,

Binarus

@AllKind (Contributor Author) commented Oct 19, 2023

The last force-push was just adding this:
Requires(pre): dkms >= 2.2.0.3

The %pre scriptlet requires dkms. This makes sure it's there at that point.
Added that just for safety and consistency.

@AllKind (Contributor Author) commented Oct 19, 2023

@Binarus
I don't think this is related to my patch, because it does not touch the new native Debian package generation code.

About your issue with 2.1.13, I'd need more info.

@Binarus commented Oct 19, 2023

@AllKind Thank you very much for your time! I have just spent 8 hours trying to compile 2.2.0, to no avail.

But back to our problem with 2.1.13: I also posted it on the mailing list 9 days ago, but got no reaction. Well, the mailing list is the wrong place anyway :-) The following is a literal copy of my post there:

Hi all,

I am currently trying to compile OpenZFS 2.1.13 on a Debian bullseye stock system, following the process that is outlined here:

https://openzfs.github.io/openzfs-docs/Developer%20Resources/Building%20ZFS.html

I don't checkout the master branch, though; instead, I unpack the respective asset (https://github.com/openzfs/zfs/releases/download/zfs-2.1.13/zfs-2.1.13.tar.gz) into a directory and build that.

  • sh autogen.sh runs as expected and does not throw errors.

  • ./configure runs as expected and does not throw errors.

  • make -s -j$(nproc) runs as expected and does not throw errors.

  • make native-deb immediately throws the following error:

make: *** No rule to make target 'native-deb'. Stop.

I don't care about that one, though, because I could always live with the non-native (rpm-converted) packages. On the other hand, that command is shown in the instructions mentioned above, so I think the problem is worth reporting.

  • The real problem: make deb runs quite a while, but then throws the following error:
make[9]: Entering directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests/tests/functional/cli_root/zpool_events'
  CC       ereports.o
  CCLD     ereports
/usr/bin/ld: /tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/lib/libzfs/.libs/libzfs.so: undefined reference to `zfs_setproctitle'
collect2: error: ld returned 1 exit status

There are a few lines more; I'll put the whole excerpt at the end of this message.

Obviously, the linker can't find zfs_setproctitle anywhere, either because it isn't there or because it can't find the library it is in. What would be the easiest way to mitigate this issue?

Best regards, and thank you very much in advance,

Binarus

make[9]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests/tests/functional/cli_root/zpool_detach'
Making all in zpool_events
make[9]: Entering directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests/tests/functional/cli_root/zpool_events'
  CC       ereports.o
  CCLD     ereports
/usr/bin/ld: /tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/lib/libzfs/.libs/libzfs.so: undefined reference to `zfs_setproctitle'
collect2: error: ld returned 1 exit status
make[9]: *** [Makefile:834: ereports] Error 1
make[9]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests/tests/functional/cli_root/zpool_events'
make[8]: *** [Makefile:825: all-recursive] Error 1
make[8]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests/tests/functional/cli_root'
make[7]: *** [Makefile:736: all-recursive] Error 1
make[7]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests/tests/functional'
make[6]: *** [Makefile:708: all-recursive] Error 1
make[6]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests/tests'
make[5]: *** [Makefile:708: all-recursive] Error 1
make[5]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests/zfs-tests'
make[4]: *** [Makefile:713: all-recursive] Error 1
make[4]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13/tests'
make[3]: *** [Makefile:941: all-recursive] Error 1
make[3]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13'
make[2]: *** [Makefile:802: all] Error 2
make[2]: Leaving directory '/tmp/zfs-build-root-OmINm3J0/BUILD/zfs-2.1.13'
error: Bad exit status from /tmp/zfs-build-root-OmINm3J0/TMP/rpm-tmp.6gFhMu (%build)


RPM build errors:
    Bad exit status from /tmp/zfs-build-root-OmINm3J0/TMP/rpm-tmp.6gFhMu (%build)
make[1]: *** [Makefile:1701: rpm-common] Error 1
make[1]: Leaving directory '/install/zfs-2.1.13'
make: *** [Makefile:1660: rpm-utils-initramfs] Error 2

@AllKind (Contributor Author) commented Oct 19, 2023

@Binarus
My first comment seems to be wrong, as I have now taken a closer look at deb.am.
Maybe the make deb target is broken; I cannot tell for sure right now.

zfs 2.1.13 does not support the native-* targets yet.
For 2.2.0 you should get native deb packages with: #15404 (comment)

But in general I think you are using the wrong guide.
https://openzfs.github.io/openzfs-docs/Developer%20Resources/Custom%20Packages.html#debian-and-ubuntu
would be the one to use.
But beware, you might run into problems when using the 'old way' (rpm-to-deb conversion using alien):
make -j1 deb-utils deb-dkms could lead to the dkms modules not being automatically installed when installing zfs-dkms, which is exactly the problem this patch is supposed to fix.

So if you want to build a 2.1.13 package, or a 2.2.0 package (the old way), while avoiding the dkms install problem, I suggest using my patch ;-)

Something like this should do the trick:

git clone https://github.com/AllKind/zfs-fix_alien_deb_gen.git
cd zfs-fix_alien_deb_gen

for building 2.2.0:

git checkout zfs-2.2.0
git switch -c my_2.2.0

for building 2.1.13:

git checkout zfs-2.1.13
git switch -c my_2.1.13
git cherry-pick 537704a

then, in either case:

./autogen.sh
./configure --enable-systemd
make -j1 deb-utils deb-dkms
sudo apt-get install --fix-missing ./*.deb

For the last step, I always copy the packages I want (i.e. without debug) installed to another folder and install from there.
It's all likely off-topic, but I hope it helps!

@AllKind (Contributor Author) commented Oct 19, 2023

@Binarus
I just ran a test building the master branch with make deb, and all went well on my system.
So... ?

@Binarus commented Oct 20, 2023

@AllKind Thank you very much for your effort and for your time!

In the meantime, I have managed to build a working ZFS 2.2.0 following the old way (which had always worked reliably before). I was kind of surprised by the contents of the page you linked, and of course I'll follow the procedures outlined there next time.

It's probably better to build native Debian packages than "aliens". The only thing in the new guide that I don't understand is why we should do rm ../openzfs-zfs-dkms_*.deb. I'll try to understand it when I actually follow this method next time.

Although it will be off-topic again, I would like to describe why it took me more than 8 hours to finally build 2.2.0. I am convinced that there are quite a few folks out there who encounter the same problems.

The first problem was error messages that occurred when trying to install zfs-dkms. Something like 2.0 not being in a tree or so; I don't remember the exact wording. I first thought that this message was due to the ZFS 2.2.0 build having produced broken packages or something like that.

It took me a lot of time to figure out that these error messages were coming from a pre-something script that tried to uninstall a previous version of the zfs-dkms package that was broken or orphaned, respectively. It was quite hard to remove that package manually. The only thing that worked was the procedure described here: https://stackoverflow.com/questions/48431372/removing-broken-packages-in-ubuntu

After having manually removed that package, installation of the debs that had been generated during the ZFS 2.2.0 build went without issues.

But as soon as I used zpool, I got an error message saying that a symbol in zpool had a different size than in a library, and that I should consider re-linking, followed by a segmentation fault. Again, I thought that this was due to a failure in the build process, and rebuilt ZFS 2.2.0 more than a dozen times, each time slightly varying the method, following guides from the net. Nothing helped.

Then I remembered that several months ago I had issued make install from within the build directory after having compiled the OpenZFS version that was current at that time. make install had put the ZFS utilities into /usr/local/sbin.

But yesterday, I didn't issue make install after the OpenZFS 2.2.0 build; I only installed the Debian packages the build had generated. Installing these Debian packages places the ZFS utilities into /usr/sbin. Now, due to the dreaded path-caching behavior of bash, each time I executed zpool, it was /usr/local/sbin/zpool that ran instead of /usr/sbin/zpool.

Undoubtedly, I wasn't having my most brilliant day yesterday, so this was hard to figure out. The solution was simple, though: cd into the build directory of the previous ZFS version I had used and issue make uninstall there. From then on, bash threw an error message each time I wanted to start zpool, complaining that it couldn't find it. hash -r ended that insanity. I have now learned my lesson, and the first thing I'll do in the future on a new Linux system with bash will be to put export COMMAND_PROMPT='hash -r' into the .profile.
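For readers hitting the same stale-hash problem, the fix boils down to this (as corrected further down in the thread, the variable is actually PROMPT_COMMAND, not COMMAND_PROMPT):

# clear bash's cached command locations once
hash -r
# or have bash clear the cache before every prompt, e.g. from ~/.profile
export PROMPT_COMMAND='hash -r'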

So I have to apologize for the noise. It was my own mistake. The build process of ZFS 2.2.0 works correctly, and so do the artifacts and Debian packages that are generated. As noted above, I have only tested the classical method with the alienized packages that is described in the first link I gave. I don't have time for further investigation because these problems, plus a few new problems with ZFSBootMenu, have now cost me two full days.

In general, I personally find it a bit questionable that zpool crashes with a segmentation fault when encountering library versions it hasn't been tested with, instead of checking the libraries' versions and quitting if they don't match. I even lost a bit of data yesterday when one of my servers locked up during a zpool operation, for the first time in 4 years. I could imagine that this was related to the same problem, but I really don't know.

@behlendorf added the Component: Packaging (custom packages) and Status: Code Review Needed (Ready for review and testing) labels on Oct 20, 2023
@AllKind (Contributor Author) commented Oct 20, 2023

I found a problem with dnf doing upgrade.
Working on it now.
Please hold for a bit.

@AllKind (Contributor Author) commented Oct 21, 2023

It turns out dnf has (for me) a weird behavior when doing an upgrade or reinstall:
it runs the %preun scriptlet of the old package after installing the new one.
Well, maybe that's OK... But:
in this case, where a dkms remove -m %{module} -v %{version} is performed, it turns out it does not use the %{version} of the old package, but instead the version from the new package.
Which of course led to a state where the new version of the package was installed, but with the modules removed from dkms, defeating the purpose of it all.

Here is an example output (the ... represent removed output):

Running transaction
  Preparing        :                                                                                                                                                                                                      1/1 
  Upgrading        : libnvpair3-2.2.0-1_g1cd914a44.fc38.x86_64                                                                                                                                                           1/16 
  Upgrading        : libuutil3-2.2.0-1_g1cd914a44.fc38.x86_64                                                                                                                                                            2/16 
  Upgrading        : libzfs5-2.2.0-1_g1cd914a44.fc38.x86_64                                                                                                                                                              3/16 
  Upgrading        : libzpool5-2.2.0-1_g1cd914a44.fc38.x86_64                                                                                                                                                            4/16 
  Running scriptlet: zfs-dkms-2.2.0-1_g1cd914a44.fc38.noarch                                                                                                                                                             5/16 
Running pre installation script: /var/tmp/rpm-tmp.8959Vw. Parameters: 2
...
  Upgrading        : zfs-dkms-2.2.0-1_g1cd914a44.fc38.noarch                                                                                                                                                             5/16 
  Running scriptlet: zfs-dkms-2.2.0-1_g1cd914a44.fc38.noarch                                                                                                                                                             5/16 
Running post installation script: /var/tmp/rpm-tmp.3rfSzr. Parameters: 2

...
  Upgrading        : zfs-2.2.0-1_g1cd914a44.fc38.x86_64                                                                                                                                                                  6/16 
  Running scriptlet: zfs-2.2.0-1_g1cd914a44.fc38.x86_64                                                                                                                                                                  6/16 
  Upgrading        : zfs-dracut-2.2.0-1_g1cd914a44.fc38.noarch                                                                                                                                                           7/16 
  Upgrading        : python3-pyzfs-2.2.0-1_g1cd914a44.fc38.noarch                                                                                                                                                        8/16 
  Cleanup          : python3-pyzfs-2.1.13-9_ge61a86ff2.fc38.noarch                                                                                                                                                       9/16 
  Cleanup          : zfs-dracut-2.1.13-9_ge61a86ff2.fc38.noarch                                                                                                                                                         10/16 
  Running scriptlet: zfs-2.1.13-9_ge61a86ff2.fc38.x86_64                                                                                                                                                                11/16 
  Cleanup          : zfs-2.1.13-9_ge61a86ff2.fc38.x86_64                                                                                                                                                                11/16 
  Running scriptlet: zfs-2.1.13-9_ge61a86ff2.fc38.x86_64                                                                                                                                                                11/16 
  Running scriptlet: zfs-dkms-2.1.13-9_ge61a86ff2.fc38.noarch                                                                                                                                                           12/16 
Running pre uninstall script: /var/tmp/rpm-tmp.fbHaVs. Parameters: 1
This is an upgrade. Skipping pre uninstall action.

As you can see in the last line, I had already caught that action in my last change.

As another consequence, dkms status was broken on upgrade, as described here: #15336 (which could be reported to dkms, because it's really bad error handling).

So I came up with this in %pre:

# We don't want any other versions lingering around in dkms.
# Tests with 'dnf' showed that in case of reinstall, or upgrade,
#  the preun scriptlet removed the version we are trying to install.
# Because of this, find all zfs dkms sources in /usr/src and remove
#  them, if we find a matching version in dkms.
cd /usr/src/
for x in %{module}-*; do
    if ! [ -d "$x" ]; then
        continue
    fi
    otherver=`printf "%s\n" "$x" | awk -F '-' '{ print $2 }'`
    if [ "$otherver" != %{version} ]; then
        if [ `dkms status -m %{module} -v "$otherver" | grep -c %{module}` -gt 0 ]; then
            echo "Removing old %{module} dkms modules version $otherver."
            dkms remove -m %{module} -v "$otherver" --all
        fi
    fi
done

It's safe to assume the dkms sources are in /usr/src/.
It's quite safe to assume we are looking for subfolders named zfs-<version>.
Even if that isn't true and we get the wrong value into $otherver, later in the loop we check whether that $otherver exists in dkms status. Quite safe, I'd say.

Other things added:
1 - Print out the scriptlets' calling parameters, which will make future (problem) debugging easier.
2 - Use the --force parameter with dkms install, which avoids diff warnings in dkms status if an original and a binary duplicate of a module already exist (see the short illustration below).
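A minimal illustration of those two additions as they appear in the scriptlets (sketch only):

%post
# 1 - print the script name and the argument rpm passes in ($1: 1 = install, 2 = upgrade)
echo "Running post installation script: $0. Parameters: $*"
# 2 - force installation so pre-existing module copies don't leave diff warnings
dkms install --force -m %{module} -v %{version} ||: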


Tests were run like this on Fedora 38 and Ubuntu 22.04:

1 - Fresh install of zfs 2.1.12.
2 - Upgrade to v. 2.1.13 with my changes applied.
3 - Re-install of 2.1.13.
4 - Upgrade to 2.1.14 (just changed the META file on my patched version of 2.1.13).
5 - Downgrade to 2.1.13.
6 - Removal of 2.1.13
7 - Install of 2.1.13
8 - Upgrade to 2.2.0 with my changes applied.

Result: All checks passed!


Edit: btw, it could be that this also fixes #15336.
Meaning you could upgrade from a version where dkms status is still broken; it will remove the old version from dkms and get rid of the broken state.

@AllKind (Contributor Author) commented Oct 21, 2023

@Binarus

The only thing in the new guide that I don't understand is why we should do rm ../openzfs-zfs-dkms_*.deb

That's in the section for the pre-compiled kmod packages. dkms and kmod are mutually exclusive.

It took me a lot of time to figure out that these error messages were coming from a pre-something script that tried to uninstall a previous version of the zfs-dkms package that was broken or orphaned, respectively.

Most likely you hit: #15336.

Just as a note: next time, put reports like yours into the issue tracker, not here.

@Binarus commented Oct 22, 2023

I now have learned my lesson, and the first thing I'll do in future with a new Linux system with bash will be to put export COMMAND_PROMPT='hash -r' into the .profile.

Of course, the correct variable is PROMPT_COMMAND, not COMMAND_PROMPT. Please excuse the typo.

@Binarus commented Oct 22, 2023

@AllKind Thank you very much again!

I didn't foresee that it would become an extended discussion. I thought this would be the right place because the subject of this PR is a fix for wrong behavior of the DEB packages, and I found it appropriate to comment that this fix in its original form didn't work for me. Of course, in the future I'll follow your advice / suggestion and open a new issue instead.

It's absolutely great that you have fixed it now. Since I have had ZFS 2.2.0 running on two servers since yesterday, I can't try your fixes (the servers are production). But I'll upgrade ZFS to 2.2.1 as soon as it is released and open a new issue if I find any problems with it (supposing that your patches are merged by then).

Please allow a final remark for people who come across this page:

The broken DEB installation had another side effect. Even after having successfully installed ZFS 2.2.0, the DKMS build did not run when upgrading or downgrading the kernel. I could solve that problem by installing the new kernel first and then manually running the DKMS build for that kernel. The procedure roughly went like this:

  • Make sure that /usr/src/zfs-2.2.0 exists and is correctly populated.
  • dkms add zfs/2.2.0.
  • dkms install zfs/2.2.0 -k 6.1.0-13-amd64

When building, I was on kernel 5.10.0-26-amd64, and the target kernel was 6.1.0-13-amd64.

Then depmod -a 6.1.0-13-amd64 and update-initramfs -k all -u, reboot, and voila: ZFS 2.2.0 on kernel 6.1.0-13-amd64.
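Condensed into commands (kernel version taken from the description above; adjust for your system):

# the module sources must already be in place, e.g. from the zfs-dkms package
ls /usr/src/zfs-2.2.0
dkms add zfs/2.2.0
dkms install zfs/2.2.0 -k 6.1.0-13-amd64
depmod -a 6.1.0-13-amd64
update-initramfs -k all -u
# then reboot into the new kernel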

Best regards,

Binarus

@AllKind (Contributor Author) commented Oct 22, 2023

Short summary of my last push:

1 - Changed the location to retrieve the old zfs versions to /var/lib/dkms/%{module}.
2 - Re-introduced the dkms add/remove parameter --rpm_safe_upgrade.
3 - Added checking and handling of the broken dkms status.
4 - Added ||: after every command to ensure a zero exit status.

Testing:

Again I did extensive testing.
This time I did not test with 2.2.0, but we already know from before that it's all the same.
I created a gist with a complete log of what I did: https://gist.github.com/AllKind/044ef529ff6ae86f09410b1d2ad8ef63

Long version:

1 - Changed the location used to retrieve the old zfs versions to /var/lib/dkms/%{module}.
After thinking it over, I came to the conclusion that it is better to use this path, because:
a: It is less likely to contain a bunch of other folders which could match the glob.
b: It limits the loop to only the versions dkms knows about.
c: If dkms status is broken due to missing sources in /usr/src, a glob over /usr/src won't catch that.

If someone has a better suggestion for a glob pattern than for x in [[:digit:]]*; do, I'm happy to hear it (a sketch of the revised loop is below).
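A rough sketch of what the revised %pre loop looks like with that location (simplified from the points above; the actual scriptlet carries more error handling and messages):

# iterate over the version directories dkms already knows about, not /usr/src
if cd /var/lib/dkms/%{module} 2>/dev/null; then
    for x in [[:digit:]]*; do
        [ -d "$x" ] || continue
        if [ "$x" != %{version} ]; then
            echo "Removing old %{module} dkms modules version $x."
            dkms remove -m %{module} -v "$x" --all ||:
        fi
    done
fi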

2 - Re-introduced the dkms add/remove parameter --rpm_safe_upgrade.
The dkms man page states:

   --rpm_safe_upgrade
          This flag should be used when packaging DKMS enabled modules in RPMs. It should be specified during both the add and remove actions in the RPM spec to ensure that DKMS and RPM behave correctly in all scenarios when upgrading between various versions of a dkms enabled module RPM package. See the sample.spec file for an example or read more in the section below on Creating RPMs Which Utilize DKMS.

CREATING RPMS WHICH UTILIZE DKMS
   See the sample.spec file packaged with DKMS as an example for what your RPM spec file might look like. Creating RPMs which utilize dkms is a fairly straight-forward process. The RPM need only install the source into /usr/src/<module>-<module-version>/ and then employ dkms itself to do all the work of installation. As such, the RPM should first untar the source into this directory. From here, within the RPM .spec file, a dkms add should be called (remember to use the --rpm_safe_upgrade flag during the add) followed by a dkms build followed by a dkms install. Your dkms.conf file should be placed within the /usr/src/<module>-<module-version>/ directory.

   Under the removal parts of the .spec file, all that needs to be called is a: dkms remove -m <module> -v <module-version> --all --rpm_safe_upgrade. Use of the --rpm_safe_upgrade flag is imperative for making sure DKMS and RPM play nicely together in all scenarios of using the -Uvh flag with RPM to upgrade dkms enabled packages. It will only function if used during both the add and remove actions within the same RPM spec file. Its use makes sure that when upgrading between different releases of an RPM for the same <module-version>, DKMS does not do anything dumb (eg. it ensures a smooth upgrade from megaraid-2.09-5.noarch.rpm to megaraid-2.09-6.noarch.rpm).

--rpm_safe_upgrade was used before zfs-dkms 2.1.13 (see: https://github.com/openzfs/zfs/blob/zfs-2.1.12/rpm/generic/zfs-dkms.spec.in). But it also used /usr/lib/dkms/common.postinst in %post, which itself does not utilize this parameter.
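For clarity, the paired usage the man page describes looks roughly like this with the macros used in this PR (the actual spec wraps the flag in a %{!?not_rpm:--rpm_safe_upgrade} conditional):

# add action: register the module, flagged for rpm-safe upgrades
dkms add -m %{module} -v %{version} --rpm_safe_upgrade ||:
# remove action: unregister it with the same flag so 'rpm -Uvh' upgrades behave
dkms remove -m %{module} -v %{version} --all --rpm_safe_upgrade ||: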

3 - Added checking and handling of the broken dkms status.
As the issue tracker shows, re-installing / upgrading / removing zfs 2.1.13 and 2.2.0 left quite a few people with broken installations, of which quite a bunch, I'm sure, gave up frustrated.
I'm quite aware that adding such code is very much debatable.
The reasons I decided to do it were:
a: Following the principle: clean up the mess you made.
b: With it, we can fix broken installations, warn the user about it, and give advice on what to do.

You can see how successful it is in the gist I created.
The only condition where it does not work is if dkms status is broken and the user does a package-manager remove of zfs-dkms.
But I did not find a good solution for that, short of replicating all the code that dkms would run, which is of course nonsense.
Instead I chose to inform the user.
Actually I think a re-install followed by a remove would resolve the situation, which is something many people would try anyway.

c: It does not do any harm. If the dkms status for that exact version of zfs is not broken, nothing is done.

4 - Added ||: after every command to ensure a zero exit status.
I introduced this change because https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/#_syntax states:

All scriptlets MUST exit with the zero exit status. Because RPM in its default configuration does not execute shell scriptlets with the -e argument to the shell, excluding explicit exit calls (frowned upon with a non-zero argument!), the exit status of the last command in a scriptlet determines its exit status. Most commands in the snippets in this document have a “|| :” appended to them, which is a generic trick to force the zero exit status for those commands whether they worked or not. Usually the most important bit is to apply this to the last command executed in a scriptlet, or to add a separate command such as plain “:” or “exit 0” as the last one in a scriptlet. Note that depending on the case, other error checking/prevention measures may be more appropriate.

Non-zero exit codes from scriptlets can break installs/upgrades/erases such that no further actions will be taken for that package in a transaction (see Ordering), which may for example prevent an old version of a package from being erased on upgrades, leaving behind duplicate rpmdb entries and possibly stale, unowned files on the filesystem. There are some cases where letting the transaction to proceed when some things in scriptlets failed may result in partially broken setup. It is however often limited to that package only whereas letting a transaction to proceed with some packages dropped out on the fly is more likely to result in broader system wide problems.
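In practice that just means appending the no-op fallback to each scriptlet command, for example:

# the trailing '||:' forces a zero exit status even if dkms itself fails
dkms remove -m %{module} -v %{version} --all ||: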

Endnote

After now 4 complete days of reading and debugging, I'm pretty sure I covered it all. (But LOL - you never know what you don't know.)
In 3 days I will be gone for 10 days. Any change you want me to make can only be done in the next 2 days (where I have limited time).
Ah, almost forgot: I used the old backtick `command` substitution syntax because at the beginning I thought "you never know which minimal shell this is going to run on". Then I noticed dkms itself uses it, and while debugging on Fedora I also saw a dependency on bash. So if you want me to change that to the $(command) syntax - no problem.
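For reference, the two substitution styles are functionally equivalent here; only the spelling differs:

# old backtick form, as currently used in the scriptlets
otherver=`printf "%s\n" "$x" | awk -F '-' '{ print $2 }'`
# POSIX $() form, which nests and quotes more cleanly
otherver=$(printf "%s\n" "$x" | awk -F '-' '{ print $2 }')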

Have a nice day!

@Binarus commented Oct 23, 2023

@AllKind

Just wanted to say a big thanks again for all your efforts and your time. I'll test it when OpenZFS 2.2.1 is available and report back. I'm hoping that the reason for your absence is a pleasant one!

Best regards,

Binarus

@AllKind (Contributor Author) commented Oct 24, 2023

Just added some comments and rebased.

@lancethepants commented:

Thanks for this patch, got 2.2.0 running on my Debian 10 Server.

@evelikov commented:

Fwiw the amount of workarounds required makes me feel uneasy. Although that's a me issue 😅

What I would kindly ask is that we get to the bottom of what caused this in the first place. There are seemingly 2-3 different PRs and another 2-3 issues that discuss this topic, and so far I'm failing to see one that summarises the root cause of the problem.

If that's a dkms bug, please open an issue/PR and cc me. Thanks in advance.

@AllKind (Contributor Author) commented Oct 24, 2023

@evelikov
I gave you a short description of what happened over at dell/dkms#357 (I know you know ;-) ).
But it's a good idea to bring it up here as well.

For (I guess) a long time this was the rpm .spec: https://github.com/openzfs/zfs/blob/zfs-2.1.12/rpm/generic/zfs-dkms.spec.in

As the author of this PR #13182 pointed out:

Two problems led to unexpected behaviour of the scriptlets:

Newer DKMS versions change the formatting of "dkms status":

(old) zfs, 2.1.2, 5.14.10-300.fc35.x86_64, x86_64: installed
(new) zfs/2.1.2, 5.14.10-300.fc35.x86_64, x86_64: installed

Which broke a conditional determining whether to uninstall.

zfs_config.h not packaged properly, but was attempted to be read
in the %preun scriptlet:

CONFIG_H="/var/lib/dkms/zfs/2.1.2/*/*/zfs_config.h"

Which broke the uninstallation of the module, which left behind a
dangling symlink, which broke DKMS entirely with this error:

Error! Could not locate dkms.conf file.
File: /var/lib/dkms/zfs/2.1.1/source/dkms.conf does not exist.

But the introduction of the %posttrans hook brought in the problem that alien did not create a scriptlet in the Debian packages, which means /usr/lib/dkms/common.postinst was never called at any point.

Edit: the dangling symlink problem definitely came from the broken scriptlet, which never removed it from the dkms tree, while the package itself (meaning the sources in /usr/src) was removed.

We don't have enough information (and I don't think it's worth the time) to untangle each possible combination of the above-mentioned problems, which led to the 3 different issues that were reported.

What's important is that my PR fixes it all ;-)

Alien does not honour the %posttrans hook.
So move the dkms uninstall/install scripts to the
 %pre/%post hooks in case of package install/upgrade.
In case of package removal, handle that in %preun.
Add removal of all old dkms modules.
Add checking for broken 'dkms status'. Handle that as
good as possible and warn the user about it.
Also add more verbose messages about what we are doing.

Signed-off-by: Mart Frauenlob <[email protected]>

@behlendorf (Contributor) left a comment

Thanks for working through this and carefully testing it. Based on the positive feedback I'm going to pull this in.

@behlendorf added the Status: Accepted (Ready to integrate: reviewed, tested) label and removed the Status: Code Review Needed (Ready for review and testing) label on Nov 7, 2023
@behlendorf merged commit 9ce567c into openzfs:master on Nov 7, 2023
23 of 25 checks passed
ixhamza pushed a commit to truenas/zfs that referenced this pull request Nov 20, 2023
jcferretti pushed a commit to jcferretti/zfs that referenced this pull request Nov 26, 2023
mumbleskates pushed a commit to mumbleskates/zfs that referenced this pull request Dec 1, 2023
behlendorf pushed a commit that referenced this pull request Dec 5, 2023
@DurvalMenezes commented Dec 8, 2023

FTR: this commit applies perfectly (via patch -p1) on top of zfs-2.1.14, but to no avail: DKMS remains broken in the packages resulting from make deb.

EDIT: The above was with intermediate commit #0e92318; with the last one listed above (#9ce567c) it seems to have worked. I will know for sure after my next reboot :-) Apologies for the initially false report.

lundman pushed a commit to openzfsonwindows/openzfs that referenced this pull request Dec 12, 2023
@JKDingwall (Contributor) commented:

I'm still finding that 2.1.15 has regressed compared to previous 2.1.x versions with regard to the dkms install. In a chroot environment the module does not build:

Setting up zfs-dkms (2.1.15-1) ...
Running post installation script: /var/lib/dpkg/info/zfs-dkms.postinst. Parameters: configure
Adding zfs dkms modules version 2.1.15 to dkms.

Creating symlink /var/lib/dkms/zfs/2.1.15/source ->
                 /usr/src/zfs-2.1.15

DKMS: add completed.
Installing zfs dkms modules version 2.1.15 for the current kernel.
Error! Your kernel headers for kernel 5.4.0-150-generic cannot be found.
Please install the linux-headers-5.4.0-150-generic package,
or use the --kernelsourcedir option to tell DKMS where it's located

In 2.1.14 with c5bbd80 reverted the install works and the zfs-dkms.postinst script looks like:

#!/bin/bash
for POSTINST in /usr/lib/dkms/common.postinst; do
    if [ -f $POSTINST ]; then
        $POSTINST zfs 2.1.14
        exit $?
    fi
    echo "WARNING: $POSTINST does not exist."
done
echo -e "ERROR: DKMS version is too old and zfs was not"
echo -e "built with legacy DKMS support."
echo -e "You must either rebuild zfs with legacy postinst"
echo -e "support or upgrade DKMS to a more current version."
exit 1

In 2.1.15:

#!/bin/bash
echo "Running post installation script: $0. Parameters: $*"
# Add the module to dkms, as recommended in the dkms man page.
# This is generally rpm specific.
# But this also may help, if we have a broken 'dkms status'.
# Because, if the sources are available and only the symlink pointing
#  to them is missing, this will resolve the situation
echo "Adding zfs dkms modules version 2.1.15 to dkms."
dkms add -m zfs -v 2.1.15  ||:

# After installing the package, dkms install this zfs version for the current kernel.
# Force the overwriting of old modules to avoid diff warnings in dkms status.
# Or in case of a downgrade to overwrite newer versions.
# Or if some other backed up versions have been restored before.
echo "Installing zfs dkms modules version 2.1.15 for the current kernel."
dkms install --force -m zfs -v 2.1.15 ||:

I can get back to a working situation for my case by reverting:

@AllKind (Contributor Author) commented Feb 28, 2024

First off, dkms does not seem to be chroot safe at all: dell/dkms#145
The postinst script is under discussion to be removed: dell/dkms#257 (comment) dell/dkms#339
That's why I thought it better not to rely on it. It also has other issues that are not really being worked on. Also, the code becomes simpler by just using dkms actions.

Error! Your kernel headers for kernel 5.4.0-150-generic cannot be found.

So dkms cannot find the kernel headers... Wouldn't the easiest solution be to provide it with an environment where they can be found? Usually /lib/modules/KERNEL_VERSION/build is a symbolic link to them.
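A quick way to check for (and, if necessary, provide) that link inside the chroot; the header path below is only an example and must match the target kernel:

# does the build symlink for the target kernel exist and resolve?
ls -l /lib/modules/5.4.0-150-generic/build
# if not, point it at the matching headers (illustrative path)
ln -sfn /usr/src/linux-headers-5.4.0-150-generic /lib/modules/5.4.0-150-generic/build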

@AllKind (Contributor Author) commented Feb 28, 2024

I took a quick look at the postinst from dkms, and at first sight the only difference seems to be that it passes the kernel version to the dkms install command.

    if [ $(echo $dkms_status | grep -c ": built") -eq 0 ]; then
        if [ ! -L /var/lib/dkms/$NAME/$VERSION/source ]; then
            echo "This package appears to be a binaries-only package"
            echo " you will not be able to build against kernel $KERNEL"
            echo " since the package source was not provided"
            continue
        fi
        if _check_kernel_dir $KERNEL; then
            echo "Building initial module for $KERNEL"
            set +e
            dkms build -m $NAME -v $VERSION -k $KERNEL $ARCH > /dev/null
            case $? in
            77)
                set -e
                echo "Skipped."
                continue
                ;;
            0)
                set -e
                echo "Done."
                ;;
            *)
                exit $?
                ;;
            esac
            dkms_status=$(dkms status -m $NAME -v $VERSION -k $KERNEL $ARCH)
        else
            echo "Module build for kernel $KERNEL was skipped since the"
            echo "kernel headers for this kernel do not seem to be installed."
        fi
    fi

    #if the module is built (either pre-built or just now), install it
    if [ $(echo $dkms_status | grep -c ": built") -eq 1 ] &&
       [ $(echo $dkms_status | grep -c ": installed") -eq 0 ]; then
        dkms install -m $NAME -v $VERSION -k $KERNEL $ARCH
    fi

@JKDingwall (Contributor) commented:

First off, dkms does not seem to be chroot safe at all: dell/dkms#145 The postinst script is under discussion to be removed: dell/dkms#257 (comment) dell/dkms#339 That's why I thought it better not to rely on it. It also has other issues that are not really being worked on. Also, the code becomes simpler by just using dkms actions.

Error! Your kernel headers for kernel 5.4.0-150-generic cannot be found.

So dkms cannot find the kernel headers... Wouldn't the easiest solution be to provide it with an environment where they can be found? Usually /lib/modules/KERNEL_VERSION/build is a symbolic link to them.

The chroot environment is an offline build of a virtual machine disk image and uses a different kernel from the host, so installing those headers is not the solution. Other software using dkms does not have this problem, and until recent 2.1.x versions zfs did not have an issue either. We've been building this way since the 0.6.x releases.

The common.postinst has handling exactly for the situation where the environment is a chroot:

if [ -z "$autoinstall_all_kernels" ]; then
    # If the current kernel is installed on the system or chroot
    if [ `_is_kernel_name_correct $CURRENT_KERNEL` = "yes" ]; then
        if [ -n "$NEWEST_KERNEL" ] && [ ${CURRENT_KERNEL} != ${NEWEST_KERNEL} ]; then
            KERNELS="$CURRENT_KERNEL $NEWEST_KERNEL"
        else
            KERNELS=$CURRENT_KERNEL
        fi
    # The current kernel is not useful as it's not installed
    else
        echo "It is likely that $CURRENT_KERNEL belongs to a chroot's host"

        # Let's use only the newest kernel if this is not a first installation
        # otherwise build for all kernels
        if [ -n "$NEWEST_KERNEL" -a -n "$UPGRADE" ]; then
            KERNELS="$NEWEST_KERNEL"
        fi
    fi
fi

@AllKind (Contributor Author) commented Feb 29, 2024

Bit of a bummer that I wasn't aware of that problem. My patch had been around for months and had already been applied to 3 ZFS versions.
But it is how it is now.
The easiest solution I see is to reintroduce the %post scriptlet from before 2.1.4.
The downside is that we rely on the dkms postinst again.

Another option I thought of is to get the kernel version and kernel source from configure.
While --with-linux-obj would be suitable to pass to dkms as --kernelsourcedir, I don't think --with-linux would be fit to be passed to dkms with -k. Or am I wrong about that?
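For illustration, this is the kind of configure invocation those options correspond to (the header/object path is an example for a hypothetical target kernel):

./configure --enable-systemd \
    --with-linux=/usr/src/linux-headers-6.1.0-13-amd64 \
    --with-linux-obj=/usr/src/linux-headers-6.1.0-13-amd64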

@AllKind (Contributor Author) commented Mar 5, 2024

  1. As you are most likely aware, but maybe also for others, a quick workaround (that does not involve reverting commits) would be to just run dkms install zfs/2.1.15 -k KERNEL_VERSION manually after the package installation.
  2. I wonder if in general kmods wouldn't have been the better fit in the first place. Quoting the wiki:

kmod

The key thing to know when building a kmod package is that a specific Linux kernel must be specified. At configure time the build system will make an educated guess as to which kernel you want to build against. However, if configure is unable to locate your kernel development headers, or you want to build against a different kernel, you must specify the exact path with the --with-linux and --with-linux-obj options.

  3. Nonetheless I cooked something up:
    3.1 You can export environment variables for the kernel version and kernel source (KVERS and KSRC).
    3.2 If the above method is not used, a regular install (current kernel) is attempted. If that fails, it will fall back to using the dkms postinst.

I didn't have time to set up a testing VM yet, and I don't know enough about your (or in general any) chroot setup, so I ask you to test before I create a PR.
Just replace the %post scriptlet hook in rpm/generic/zfs-dkms.spec.in with the code below.

%post
echo "Running post installation script: $0. Parameters: $*"

# Add the module to dkms, as recommended in the dkms man page.
# This is generally rpm specific.
# But this also may help, if we have a broken 'dkms status'.
# Because, if the sources are available and only the symlink pointing
#  to them is missing, this will resolve the situation
echo "Adding %{module} dkms modules version %{version} to dkms."
dkms add -m %{module} -v %{version} %{!?not_rpm:--rpm_safe_upgrade} ||:

# After installing the package, dkms install this zfs version for the current,
#  or specified kernel.
# Force the overwriting of old modules to avoid diff warnings in dkms status.
# Or in case of a downgrade to overwrite newer versions.
# Or if some other backed up versions have been restored before.
echo "Installing %{module} dkms modules version %{version} for kernel $KVER."

# Use environment variable(s) to specify the kernel version and optionally
#  its source directory. This is useful for chroot.
if [ -n "$KVERS" ]; then
    if [ -n "$KSRC" ]; then
        echo "Using source directory $KSRC."
        dkms install --force -m %{module} -v %{version} -k "$KVERS" --kernelsourcedir "$KSRC" ||:
    else
        dkms install --force -m %{module} -v %{version} -k "$KVERS" ||:
    fi
    # Nothing more to do, as a specific kernel was given.
    exit 0
fi

# Use current kernel
dkms install --force -m %{module} -v %{version} ||:

# If installation was not successful (maybe due to chroot)
#  fall back to legacy postinst.
# This will try to install to all kernels, if autoinstall is enabled.
if [ `dkms status -m %{module} -v %{version} | grep -c ": installed$"` -eq 0 ]; then
    echo "Installation unsuccessful, trying legacy postinst"
    # Remove previously added, to avoid double add.
    dkms remove -m %{module} -v %{version} %{!?not_rpm:--rpm_safe_upgrade} >/dev/null ||:
    for POSTINST in /usr/lib/dkms/common.postinst; do
        if [ -f $POSTINST ]; then
            $POSTINST %{module} %{version} && exit 0 ||:
        fi
        echo "WARNING: $POSTINST does not exist."
    done
    echo -e "ERROR: DKMS version is too old and %{module} was not"
    echo -e "built with legacy DKMS support."
    echo -e "You must either rebuild %{module} with legacy postinst"
    echo -e "support, or upgrade DKMS to a more current version."
fi
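A possible way to exercise both paths (3.1 and 3.2) when installing the resulting deb; the kernel version, header path and package file name are illustrative, and whether exported variables actually reach the maintainer script depends on how the package manager is invoked, so please treat this as an assumption to verify:

# 3.1: target a specific kernel inside the chroot
KVERS=5.4.0-150-generic KSRC=/usr/src/linux-headers-5.4.0-150-generic \
    dpkg -i zfs-dkms_2.1.15-1_amd64.deb

# 3.2: without the variables, the scriptlet builds for the running kernel,
#      falling back to the legacy common.postinst if that fails
dpkg -i zfs-dkms_2.1.15-1_amd64.deb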

@JKDingwall (Contributor) commented:

Hi - just acknowledging that I've seen the proposed change, but I need to find some time for testing.

@AllKind (Contributor Author) commented Mar 16, 2024

I just noticed that instead of:

if [ `dkms status -m %{module} -v %{version} | grep -c ": installed$"` -eq 0 ]; then

this:

if [ `dkms status -m %{module} -v %{version} -k $(uname -r) | grep -c ": installed$"` -eq 0 ]; then

would probably be better.

@JKDingwall (Contributor) commented:

Hi,

With the patch (and the one-line change above) the postinst was generated as:

#!/bin/bash
echo "Running post installation script: $0. Parameters: $*"

# Add the module to dkms, as recommended in the dkms man page.
# This is generally rpm specific.
# But this also may help, if we have a broken 'dkms status'.
# Because, if the sources are available and only the symlink pointing
#  to them is missing, this will resolve the situation
echo "Adding zfs dkms modules version 2.1.15 to dkms."
dkms add -m zfs -v 2.1.15  ||:

# After installing the package, dkms install this zfs version for the current,
#  or specified kernel.
# Force the overwriting of old modules to avoid diff warnings in dkms status.
# Or in case of a downgrade to overwrite newer versions.
# Or if some other backed up versions have been restored before.
echo "Installing zfs dkms modules version 2.1.15 for kernel $KVER."

# Use environment variable(s) to specify the kernel version and optionally
#  its source directory. This is useful for chroot.
if [ -n "$KVERS" ]; then
    if [ -n "$KSRC" ]; then
        echo "Using source directory $KSRC."
        dkms install --force -m zfs -v 2.1.15 -k "$KVERS" --kernelsourcedir "$KSRC" ||:
    else
        dkms install --force -m zfs -v 2.1.15 -k "$KVERS" ||:
    fi
    # Nothing more to do, as a specific kernel was given.
    exit 0
fi

# Use current kernel
dkms install --force -m zfs -v 2.1.15 ||:

# If installation was not successful (maybe due to chroot)
#  fall back to legacy postinst.
# This will try to install to all kernels, if autoinstall is enabled.
if [ `dkms status -m zfs -v 2.1.15 -k $(uname -r) | grep -c ": installed$"` -eq 0 ]; then
    echo "Installation unsuccessful, trying legacy postinst"
    # Remove previously added, to avoid double add.
    dkms remove -m zfs -v 2.1.15  >/dev/null ||:
    for POSTINST in /usr/lib/dkms/common.postinst; do
        if [ -f $POSTINST ]; then
            $POSTINST zfs 2.1.15 && exit 0 ||:
        fi
        echo "WARNING: $POSTINST does not exist."
    done
    echo -e "ERROR: DKMS version is too old and zfs was not"
    echo -e "built with legacy DKMS support."
    echo -e "You must either rebuild zfs with legacy postinst"
    echo -e "support, or upgrade DKMS to a more current version."

This resulted in the module being built as expected in the chroot VM build.

@AllKind (Contributor Author) commented Mar 20, 2024

That's good news!
I assume you just ran it as is... and did not use the environment variables KVERS and KSRC?
Could you please (as I cannot test it with your chroot environment) do just a short test to see whether that works?
1: Just define KVERS.
2: Define KVERS and KSRC.

That would be of tremendous help! Thank you!

@JKDingwall (Contributor) commented:

Correct, I did not set KVERS or KSRC. I'll have to examine the build pipeline to see where the variables can be set. I'll update once I've worked it out...

Labels
Component: Packaging (custom packages), Status: Accepted (Ready to integrate: reviewed, tested)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

zfs-dkms 2.1.13 doesn't build and install module automatically
7 participants